<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://tsgdoc.socsci.ru.nl/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=P.dewater</id>
	<title>TSG Doc - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="http://tsgdoc.socsci.ru.nl/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=P.dewater"/>
	<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php/Special:Contributions/P.dewater"/>
	<updated>2026-04-26T18:21:51Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.4</generator>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Ipod&amp;diff=6486</id>
		<title>Ipod</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Ipod&amp;diff=6486"/>
		<updated>2026-04-22T10:19:03Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Removing data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tsg&lt;br /&gt;
| name           = Ipod&lt;br /&gt;
| image          = ipod.jpg&lt;br /&gt;
| caption        = Apple iPod touch&lt;br /&gt;
| manuals        = &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
iPods are used for basic video or voice recording. Because a passcode is required to access the iPod, it protects privacy-sensitive data. iPods are only meant for the video and audio recording described in the Usage section. It is not allowed to use iPods for personal purposes, such as email. iPods should never be connected to the Internet, Wi-Fi, or Bluetooth.&lt;br /&gt;
&lt;br /&gt;
==Specifications==&lt;br /&gt;
The iPod touch features a 4-inch diagonal widescreen multi-touch display with a resolution of 1136 x 640. The back-facing camera is 8 MP, capable of recording video in 1080p resolution at 25, 30, or 60 fps, and slow-motion video in 720p at 120 fps. The camera also features auto HDR for video recordings, with HEVC and H.264 as supported codecs. A microphone at the back of the iPod is used for voice memo recordings.&lt;br /&gt;
&lt;br /&gt;
==Usage==&lt;br /&gt;
===Recording===&lt;br /&gt;
====Voice recorder====&lt;br /&gt;
From the lock screen, swipe up to access the voice recorder. The passcode is required.&lt;br /&gt;
&lt;br /&gt;
[[image:Voicerecorder.JPG| 700px ]]&lt;br /&gt;
&lt;br /&gt;
====Video recorder====&lt;br /&gt;
From the lock screen, swipe up to access the video recorder. The passcode is not required.&lt;br /&gt;
&lt;br /&gt;
[[image:videorecorder.JPG| 700px ]]&lt;br /&gt;
&lt;br /&gt;
===Retrieving Data===&lt;br /&gt;
====Voice recorder====&lt;br /&gt;
1. Share your recordings to the Phone Drive app. Do not share them through email or AirDrop.&lt;br /&gt;
&lt;br /&gt;
[[image:voicerecorderr.JPG| 700px ]]&lt;br /&gt;
&lt;br /&gt;
2. Connect the iPod to the computer with the USB cable.&lt;br /&gt;
Download iTunes (contact the ICT department if you do not have the rights to install iTunes on your work laptop).&lt;br /&gt;
Start iTunes.&lt;br /&gt;
&lt;br /&gt;
[[image:itunes.JPG| 700px ]]&lt;br /&gt;
&lt;br /&gt;
3. Delete the files from the &amp;quot;Phone Drive&amp;quot; app on your iPod.&lt;br /&gt;
&lt;br /&gt;
- Open Phone Drive and navigate to the file.&lt;br /&gt;
&lt;br /&gt;
- Swipe left on the file name.&lt;br /&gt;
&lt;br /&gt;
- A red Delete button appears; tap it to delete the file.&lt;br /&gt;
&lt;br /&gt;
====Video recorder==== &lt;br /&gt;
Connect the iPod to the computer with the USB cable.&lt;br /&gt;
&lt;br /&gt;
[[image:videorecorderr.JPG| 700px ]]&lt;br /&gt;
&lt;br /&gt;
In File Explorer, find your Apple iPod and go to: \Apple iPod\Internal Storage\DCIM&lt;br /&gt;
&lt;br /&gt;
===Removing data=== &lt;br /&gt;
To prevent data leaks, all data must be completely removed from the iPod before the next person uses it.&lt;br /&gt;
&lt;br /&gt;
====Delete Voice Memos====&lt;br /&gt;
[[image:deleteIpodvoice.jpg| 750px ]]&lt;br /&gt;
&lt;br /&gt;
====Delete videos/pictures====&lt;br /&gt;
[[image:deleteIpod.jpg| 900px ]]&lt;br /&gt;
&lt;br /&gt;
====Delete Phone Drive Data====&lt;br /&gt;
&lt;br /&gt;
- Open Phone Drive and navigate to the file.&lt;br /&gt;
&lt;br /&gt;
- Swipe left on the file name.&lt;br /&gt;
&lt;br /&gt;
- A red Delete button appears; tap it to delete the file.&lt;br /&gt;
&lt;br /&gt;
== See Also ==&lt;br /&gt;
[[Cameras]]&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Ipod&amp;diff=6485</id>
		<title>Ipod</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Ipod&amp;diff=6485"/>
		<updated>2026-04-22T10:13:01Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tsg&lt;br /&gt;
| name           = Ipod&lt;br /&gt;
| image          = ipod.jpg&lt;br /&gt;
| caption        = Apple iPod touch&lt;br /&gt;
| manuals        = &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
iPods are used for basic video or voice recording. Because a passcode is required to access the iPod, it protects privacy-sensitive data. iPods are only meant for the video and audio recording described in the Usage section. It is not allowed to use iPods for personal purposes, such as email. iPods should never be connected to the Internet, Wi-Fi, or Bluetooth.&lt;br /&gt;
&lt;br /&gt;
==Specifications==&lt;br /&gt;
The iPod touch features a 4-inch diagonal widescreen multi-touch display with a resolution of 1136 x 640. The back-facing camera is 8 MP, capable of recording video in 1080p resolution at 25, 30, or 60 fps, and slow-motion video in 720p at 120 fps. The camera also features auto HDR for video recordings, with HEVC and H.264 as supported codecs. A microphone at the back of the iPod is used for voice memo recordings.&lt;br /&gt;
&lt;br /&gt;
==Usage==&lt;br /&gt;
===Recording===&lt;br /&gt;
====Voice recorder====&lt;br /&gt;
From the lock screen, swipe up to access the voice recorder. The passcode is required.&lt;br /&gt;
&lt;br /&gt;
[[image:Voicerecorder.JPG| 700px ]]&lt;br /&gt;
&lt;br /&gt;
====Video recorder====&lt;br /&gt;
From the lock screen, swipe up to access the video recorder. The passcode is not required.&lt;br /&gt;
&lt;br /&gt;
[[image:videorecorder.JPG| 700px ]]&lt;br /&gt;
&lt;br /&gt;
===Retrieving Data===&lt;br /&gt;
====Voice recorder====&lt;br /&gt;
1. Share your recordings to the Phone Drive app. Do not share them through email or AirDrop.&lt;br /&gt;
&lt;br /&gt;
[[image:voicerecorderr.JPG| 700px ]]&lt;br /&gt;
&lt;br /&gt;
2. Connect the iPod to the computer with the USB cable.&lt;br /&gt;
Download iTunes (contact the ICT department if you do not have the rights to install iTunes on your work laptop).&lt;br /&gt;
Start iTunes.&lt;br /&gt;
&lt;br /&gt;
[[image:itunes.JPG| 700px ]]&lt;br /&gt;
&lt;br /&gt;
3. Delete the files from the &amp;quot;Phone Drive&amp;quot; app on your iPod.&lt;br /&gt;
&lt;br /&gt;
- Open Phone Drive and navigate to the file.&lt;br /&gt;
&lt;br /&gt;
- Swipe left on the file name.&lt;br /&gt;
&lt;br /&gt;
- A red Delete button appears; tap it to delete the file.&lt;br /&gt;
&lt;br /&gt;
====Video recorder==== &lt;br /&gt;
Connect the iPod to the computer with the USB cable.&lt;br /&gt;
&lt;br /&gt;
[[image:videorecorderr.JPG| 700px ]]&lt;br /&gt;
&lt;br /&gt;
In File Explorer, find your Apple iPod and go to: \Apple iPod\Internal Storage\DCIM&lt;br /&gt;
&lt;br /&gt;
===Removing data=== &lt;br /&gt;
To prevent data leaks, all data must be completely removed from the iPod before the next person uses it.&lt;br /&gt;
&lt;br /&gt;
====Delete Voice Memos====&lt;br /&gt;
[[image:deleteIpodvoice.jpg| 750px ]]&lt;br /&gt;
&lt;br /&gt;
====Delete videos/pictures====&lt;br /&gt;
[[image:deleteIpod.jpg| 900px ]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== See Also ==&lt;br /&gt;
[[Cameras]]&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Ipod&amp;diff=6484</id>
		<title>Ipod</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Ipod&amp;diff=6484"/>
		<updated>2026-04-22T10:11:58Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Voice recorder */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tsg&lt;br /&gt;
| name           = Ipod&lt;br /&gt;
| image          = ipod.jpg&lt;br /&gt;
| caption        = Apple iPod touch&lt;br /&gt;
| manuals        = &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
iPods are used for basic video or voice recording. Because a passcode is required to access the iPod, it protects privacy-sensitive data. iPods are only meant for the video and audio recording described in the Usage section. It is not allowed to use iPods for personal purposes, such as email. iPods should never be connected to the Internet, Wi-Fi, or Bluetooth.&lt;br /&gt;
&lt;br /&gt;
==Specifications==&lt;br /&gt;
The iPod touch features a 4-inch diagonal widescreen multi-touch display with a resolution of 1136 x 640. The back-facing camera is 8 MP, capable of recording video in 1080p resolution at 25, 30, or 60 fps, and slow-motion video in 720p at 120 fps. The camera also features auto HDR for video recordings, with HEVC and H.264 as supported codecs. A microphone at the back of the iPod is used for voice memo recordings.&lt;br /&gt;
&lt;br /&gt;
==Usage==&lt;br /&gt;
===Recording===&lt;br /&gt;
====Voice recorder====&lt;br /&gt;
From the lock screen, swipe up to access the voice recorder. The passcode is required.&lt;br /&gt;
&lt;br /&gt;
[[image:Voicerecorder.JPG| 700px ]]&lt;br /&gt;
&lt;br /&gt;
====Video recorder====&lt;br /&gt;
From the lock screen, swipe up to access the video recorder. The passcode is not required.&lt;br /&gt;
&lt;br /&gt;
[[image:videorecorder.JPG| 700px ]]&lt;br /&gt;
&lt;br /&gt;
===Retrieving Data===&lt;br /&gt;
====Voice recorder====&lt;br /&gt;
1. Share your recordings to the Phone Drive app. Do not share them through email or AirDrop.&lt;br /&gt;
&lt;br /&gt;
[[image:voicerecorderr.JPG| 700px ]]&lt;br /&gt;
&lt;br /&gt;
2. Connect the iPod to the computer with the USB cable.&lt;br /&gt;
Download iTunes (contact the ICT department if you do not have the rights to install iTunes on your work laptop).&lt;br /&gt;
Start iTunes.&lt;br /&gt;
&lt;br /&gt;
[[image:itunes.JPG| 700px ]]&lt;br /&gt;
&lt;br /&gt;
3. Delete the files from the &amp;quot;Phone Drive&amp;quot; app on your iPod.&lt;br /&gt;
- Open Phone Drive and navigate to the file.&lt;br /&gt;
- Swipe left on the file name.&lt;br /&gt;
- A red Delete button appears; tap it to delete the file.&lt;br /&gt;
&lt;br /&gt;
====Video recorder==== &lt;br /&gt;
Connect the iPod to the computer with the USB cable.&lt;br /&gt;
&lt;br /&gt;
[[image:videorecorderr.JPG| 700px ]]&lt;br /&gt;
&lt;br /&gt;
In File Explorer, find your Apple iPod and go to: \Apple iPod\Internal Storage\DCIM&lt;br /&gt;
&lt;br /&gt;
===Removing data=== &lt;br /&gt;
To prevent data leaks, all data must be completely removed from the iPod before the next person uses it.&lt;br /&gt;
&lt;br /&gt;
====Delete Voice Memos====&lt;br /&gt;
[[image:deleteIpodvoice.jpg| 750px ]]&lt;br /&gt;
&lt;br /&gt;
====Delete videos/pictures====&lt;br /&gt;
[[image:deleteIpod.jpg| 900px ]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== See Also ==&lt;br /&gt;
[[Cameras]]&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Psychopy&amp;diff=6145</id>
		<title>Psychopy</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Psychopy&amp;diff=6145"/>
		<updated>2026-01-27T09:18:22Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{{Infobox software&lt;br /&gt;
| name                   = Psychopy&lt;br /&gt;
| logo                   = Psychopy Logo.png&lt;br /&gt;
| logo size              = 250px&lt;br /&gt;
| screenshot             = &lt;br /&gt;
| caption                = &lt;br /&gt;
| developer              = &lt;br /&gt;
| released               = &amp;lt;!-- {{Start date and age|YYYY|MM|DD|df=yes}} --&amp;gt;&lt;br /&gt;
| discontinued           = &lt;br /&gt;
| latest release version = &lt;br /&gt;
| latest release date    = &amp;lt;!-- {{Start date and age|YYYY|MM|DD|df=yes}} --&amp;gt;&lt;br /&gt;
| latest preview version = &lt;br /&gt;
| latest preview date    = &amp;lt;!-- {{Start date and age|YYYY|MM|DD|df=yes}} --&amp;gt;&lt;br /&gt;
| installed version      = &lt;br /&gt;
| installed version date = &amp;lt;!-- {{Start date and age|YYYY|MM|DD|df=yes}} --&amp;gt;&lt;br /&gt;
| status                 = Active&lt;br /&gt;
| programming language   = Python&lt;br /&gt;
| operating system       = &lt;br /&gt;
| platform               = &lt;br /&gt;
| size                   = &lt;br /&gt;
| language               = &lt;br /&gt;
| genre                  = &lt;br /&gt;
| license                = &lt;br /&gt;
| website                = &lt;br /&gt;
| resources              = &lt;br /&gt;
  {{Infobox tsg&lt;br /&gt;
    | child              = yes&lt;br /&gt;
    | manuals            = {{bulleted list&lt;br /&gt;
        | [https://www.socsci.ru.nl/wilberth/psychopy/index.html Course]&lt;br /&gt;
        | [[Media:TemplatePsychopy2019.zip|Course Template]]&lt;br /&gt;
    }}&lt;br /&gt;
    | downloads          = &lt;br /&gt;
        *  [https://gitlab.socsci.ru.nl/tsg/psychopy2024.2.4install  installation files for Psychopy2024.2.4] &lt;br /&gt;
  }}&lt;br /&gt;
}}&lt;br /&gt;
PsychoPy is an alternative to Presentation, E-Prime, and Inquisit. It is a Python library and application that allows presentation of stimuli and collection of data for a wide range of neuroscience, psychology, and psychophysics experiments. When used on DCC computers, PsychoPy is guaranteed to be millisecond-accurate. TSG does not support PsychoPy Builder and does not make any projects in it, although it can provide some lab support when you are using the Builder. TSG does support PsychoPy Coder and uses PsychoPy elements to create experiments. Sample experiments can be downloaded from the infobox on this page.&lt;br /&gt;
&lt;br /&gt;
==Installation==&lt;br /&gt;
&lt;br /&gt;
[https://img.shields.io/badge/Python-3.10%2B-3776AB?logo=python&amp;amp;logoColor=white]&lt;br /&gt;
We recommend using a modern 64-bit version of Python. In our labs, we currently have 64-bit Python 3.10.11 installed. PsychoPy recommends Python 3.10.11 or 3.8.10; TSG recommends Python 3.10.11.&lt;br /&gt;
&lt;br /&gt;
Check our GitLab server: https://gitlab.socsci.ru.nl/tsg/psychopy2024.2.4install&lt;br /&gt;
&lt;br /&gt;
=== For Pavlovia users ===&lt;br /&gt;
If you want to upload experiments to Pavlovia, you will need to install [https://github.com/git-for-windows/git/releases/download/v2.17.1.windows.1/Git-2.17.1-64-bit.exe Git-2.17.1-64-bit.exe] using these instructions: [https://gitlab.socsci.ru.nl/h.voogd/git-2.17.1.2-64-bit.exe/-/raw/master/GitInstall.docx GitInstall.docx]. Then, in&lt;br /&gt;
''System| Advanced system settings | Environment variables'', add the folder where ''git-daemon.exe'' is, to the PATH variable. Usually, that folder is named: 'C:\Program Files\Git\mingw64\libexec\git-core'.&lt;br /&gt;
&lt;br /&gt;
===Lab computer versioning===&lt;br /&gt;
&lt;br /&gt;
A default version of PsychoPy is installed in the root of the 64-bit Python 3.10.11 installation. This is the version that starts when 'psychopy' is typed at the command prompt, the version that opens when a .py file is double-clicked, and the version that the desktop icon starts. Once in a while, the default version is upgraded to a newer one; the older version then remains available in a virtualenv and in the Windows Start Menu.&lt;br /&gt;
&lt;br /&gt;
If your script fails to load in PsychoPy because it needs packages that are not installed on our lab computers, please contact TSG.&lt;br /&gt;
&lt;br /&gt;
On the lab computers, there is support for Spyder, PyCharm, and PsychoPy.&lt;br /&gt;
&lt;br /&gt;
=== For SR-Research Eyelink users ===&lt;br /&gt;
Pylink is installed on our lab computers.&lt;br /&gt;
&lt;br /&gt;
=== For SMI RED 500 and SMI HiSpeed Tower users ===&lt;br /&gt;
Include this file into your project: [https://gitlab.socsci.ru.nl/h.voogd/iviewxudp iViewXudp]. This should work on both 64-bit and 32-bit Python 3 versions.&lt;br /&gt;
&lt;br /&gt;
=== For Tobii Studio ===&lt;br /&gt;
Check this link to connect to Tobii Studio:&lt;br /&gt;
https://gitlab.socsci.ru.nl/h.voogd/tobiiclearviewtriggerapipython3&lt;br /&gt;
The tobii-research package is installed on our lab computers.&lt;br /&gt;
&lt;br /&gt;
=== For Tobii Pro Lab ===&lt;br /&gt;
The tobii-research package is installed on our lab computers. Titta (https://github.com/marcus-nystrom/Titta) does not work on a two-computer setup.&lt;br /&gt;
&lt;br /&gt;
==Usage==&lt;br /&gt;
===VirtualEnv===&lt;br /&gt;
Some of the packages installed in the steps above make it possible to use virtualenvs. A virtual environment is a Python environment in which the interpreter, libraries, and scripts are isolated from those installed in other virtual environments and (by default) from any libraries installed in a “system” Python, i.e., one installed as part of your operating system.&lt;br /&gt;
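As a quick sanity check, a script can detect whether it is running inside a virtual environment using only the standard library. This is a minimal sketch; the '''in_virtualenv''' helper name is ours, not part of PsychoPy:&lt;br /&gt;

```python
import sys

def in_virtualenv() -> bool:
    # Inside a virtualenv, sys.prefix points at the environment,
    # while sys.base_prefix still points at the base Python installation.
    return sys.prefix != sys.base_prefix

print("virtualenv active:", in_virtualenv())
```

This works for environments created with virtualenv or the standard venv module on Python 3.3+.&lt;br /&gt;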
&lt;br /&gt;
If you want to use virtualenvs on your own computer, then in ''System| Advanced system settings | Environment variables'', make a new system variable with name '''WORKON_HOME''' and value ''C:\Users\Public\Envs\''.&lt;br /&gt;
Also make a new system variable with name '''PROJECT_HOME''' and value ''C:\Users\Public\Projects''. Your virtualenvs will now be stored in ''C:\Users\Public\Envs\'' and your projects in ''C:\Users\Public\Projects''. These are also the places where virtualenvs and projects are stored on the lab computers.&lt;br /&gt;
&lt;br /&gt;
Open a command window with administrator rights and type:&amp;lt;br&amp;gt;&lt;br /&gt;
'''workon'''  to see a list of existing virtualenvs.&amp;lt;br&amp;gt;&lt;br /&gt;
'''workon &amp;lt;virtualenvname&amp;gt;''', where &amp;lt;virtualenvname&amp;gt; is the name of the virtualenv you want to use.&amp;lt;br&amp;gt;&lt;br /&gt;
'''mkvirtualenv &amp;lt;virtualenvname&amp;gt;''' to create a new and empty virtualenv.&amp;lt;br&amp;gt;&lt;br /&gt;
'''mkvirtualenv -p=310 &amp;lt;virtualenvname&amp;gt;''' to create a new and empty virtualenv that uses the installed Python3.10.11.&amp;lt;br&amp;gt;&lt;br /&gt;
'''mkvirtualenv -p=38 &amp;lt;virtualenvname&amp;gt;''' to create a new and empty virtualenv that uses the installed Python3.8.10.&amp;lt;br&amp;gt;&lt;br /&gt;
'''mkvirtualenv -p=310 --system-site-packages &amp;lt;virtualenvname&amp;gt;''' to create a new virtualenv that uses the installed Python3.10 and its site-packages.&amp;lt;br&amp;gt;&lt;br /&gt;
'''rmvirtualenv &amp;lt;virtualenvname&amp;gt;''' to remove the virtualenv with name &amp;lt;virtualenvname&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
'''deactivate''' to return to the defaults.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need a package that is not installed on our lab computers, contact TSG, so that we can decide whether to add it to our standard installation or to install it in a separate virtualenv. Do '''not''' use ''pip install'' to install anything in an existing virtualenv unless it is your own: this might interfere with existing packages and mess up other people's projects. Instead, make your own virtualenv and install the package there (use '''mkvirtualenv''' to create it, '''workon''' to activate it, then '''pip''' to install packages into it). Also, make a backup of your virtualenv, because newly created virtualenvs are gone when the lab image is updated.&lt;br /&gt;
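One way to make such a backup is to snapshot the package list of your virtualenv to a requirements file; restoring is then a single ''pip install -r requirements.txt'' in the recreated virtualenv. A minimal sketch, assuming it is run from inside your own activated virtualenv:&lt;br /&gt;

```python
import subprocess
import sys

# Write the packages installed in the current environment to requirements.txt.
# Using sys.executable ensures we query the Python (and thus the virtualenv)
# that is running this script.
with open("requirements.txt", "w") as fh:
    subprocess.run([sys.executable, "-m", "pip", "freeze"], stdout=fh, check=True)
```

Keep the resulting ''requirements.txt'' with your project files, outside the virtualenv folder itself.&lt;br /&gt;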
&lt;br /&gt;
=== Spyder ===&lt;br /&gt;
Spyder (Scientific Python Development Environment) is an IDE for Python that can be run from a command prompt. To use Spyder in the default Python environment, just type '''Spyder''' at the command prompt. To use Spyder from a virtualenv, type '''workon &amp;lt;name of the virtualenv&amp;gt;''' and then '''Spyder3'''. If you have created your own virtualenv, make sure that Spyder is installed in it.&lt;br /&gt;
&lt;br /&gt;
=== PyCharm ===&lt;br /&gt;
PyCharm, a Python IDE, is installed on our lab computers. In the lower-right corner, it displays its current Python environment. By clicking on that name, you can change the interpreter and choose from the existing virtualenvs that PyCharm knows, or add your own virtualenv. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:PycharmInterpreters.png|thumb|none|PycharmInterpreters]]&lt;br /&gt;
&lt;br /&gt;
=== Batch files ===&lt;br /&gt;
If you are working from a virtualenv other than the default, and you don't want to open a command window and type '''workon &amp;lt;virtualenv&amp;gt;''' and '''python &amp;lt;myscript.py&amp;gt;''' every time, you might want to make a batch file. Create a text file and type:&amp;lt;br&amp;gt;&lt;br /&gt;
'''call workon &amp;lt;virtualenv&amp;gt;'''&amp;lt;br&amp;gt;&lt;br /&gt;
'''python &amp;lt;myscript.py&amp;gt;'''&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Save the file into the folder where your script is, but change the extension from ''.txt'' to ''.bat'', for example, save the file as ''startmyscript.bat''.&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Psychopy&amp;diff=6144</id>
		<title>Psychopy</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Psychopy&amp;diff=6144"/>
		<updated>2026-01-27T09:16:44Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{{Infobox software&lt;br /&gt;
| name                   = Psychopy&lt;br /&gt;
| logo                   = Psychopy Logo.png&lt;br /&gt;
| logo size              = 250px&lt;br /&gt;
| screenshot             = &lt;br /&gt;
| caption                = &lt;br /&gt;
| developer              = &lt;br /&gt;
| released               = &amp;lt;!-- {{Start date and age|YYYY|MM|DD|df=yes}} --&amp;gt;&lt;br /&gt;
| discontinued           = &lt;br /&gt;
| latest release version = &lt;br /&gt;
| latest release date    = &amp;lt;!-- {{Start date and age|YYYY|MM|DD|df=yes}} --&amp;gt;&lt;br /&gt;
| latest preview version = &lt;br /&gt;
| latest preview date    = &amp;lt;!-- {{Start date and age|YYYY|MM|DD|df=yes}} --&amp;gt;&lt;br /&gt;
| installed version      = &lt;br /&gt;
| installed version date = &amp;lt;!-- {{Start date and age|YYYY|MM|DD|df=yes}} --&amp;gt;&lt;br /&gt;
| status                 = Active&lt;br /&gt;
| programming language   = Python&lt;br /&gt;
| operating system       = &lt;br /&gt;
| platform               = &lt;br /&gt;
| size                   = &lt;br /&gt;
| language               = &lt;br /&gt;
| genre                  = &lt;br /&gt;
| license                = &lt;br /&gt;
| website                = &lt;br /&gt;
| resources              = &lt;br /&gt;
  {{Infobox tsg&lt;br /&gt;
    | child              = yes&lt;br /&gt;
    | manuals            = {{bulleted list&lt;br /&gt;
        | [https://www.socsci.ru.nl/wilberth/psychopy/index.html Course]&lt;br /&gt;
        | [[Media:TemplatePsychopy2019.zip|Course Template]]&lt;br /&gt;
    }}&lt;br /&gt;
    | downloads          = &lt;br /&gt;
        *  [https://gitlab.socsci.ru.nl/tsg/psychopy2024.2.4install  installation files for Psychopy2024.2.4] &lt;br /&gt;
  }}&lt;br /&gt;
}}&lt;br /&gt;
PsychoPy is an alternative to Presentation, E-Prime, and Inquisit. It is a Python library and application that allows presentation of stimuli and collection of data for a wide range of neuroscience, psychology, and psychophysics experiments. When used on DCC computers, PsychoPy is guaranteed to be millisecond-accurate. TSG does not support PsychoPy Builder and does not make any projects in it, although it can provide some lab support when you are using the Builder. TSG does support PsychoPy Coder and uses PsychoPy elements to create experiments. Sample experiments can be downloaded from the infobox on this page.&lt;br /&gt;
&lt;br /&gt;
==Installation==&lt;br /&gt;
&lt;br /&gt;
[https://img.shields.io/badge/Python-3.10%2B-3776AB?logo=python&amp;amp;logoColor=white]&lt;br /&gt;
We recommend using a modern 64-bit version of Python. In our labs, we currently have 64-bit Python 3.10.11 installed. PsychoPy recommends Python 3.10.11 or 3.8.10; TSG recommends Python 3.10.11.&lt;br /&gt;
&lt;br /&gt;
Check our GitLab server: https://gitlab.socsci.ru.nl/tsg/psychopy2024.2.4install&lt;br /&gt;
&lt;br /&gt;
=== For Pavlovia users ===&lt;br /&gt;
If you want to upload experiments to Pavlovia, you will need to install [https://github.com/git-for-windows/git/releases/download/v2.17.1.windows.1/Git-2.17.1-64-bit.exe Git-2.17.1-64-bit.exe] using these instructions: [https://gitlab.socsci.ru.nl/h.voogd/git-2.17.1.2-64-bit.exe/-/raw/master/GitInstall.docx GitInstall.docx]. Then, in&lt;br /&gt;
''System| Advanced system settings | Environment variables'', add the folder where ''git-daemon.exe'' is, to the PATH variable. Usually, that folder is named: 'C:\Program Files\Git\mingw64\libexec\git-core'.&lt;br /&gt;
&lt;br /&gt;
===Lab computer versioning===&lt;br /&gt;
&lt;br /&gt;
A default version of PsychoPy is installed in the root of the 64-bit Python 3.10.11 installation. This is the version that starts when 'psychopy' is typed at the command prompt, the version that opens when a .py file is double-clicked, and the version that the desktop icon starts. Once in a while, the default version is upgraded to a newer one; the older version then remains available in a virtualenv and in the Windows Start Menu.&lt;br /&gt;
&lt;br /&gt;
If your script fails to load in PsychoPy because it needs packages that are not installed on our lab computers, please contact TSG.&lt;br /&gt;
&lt;br /&gt;
On the lab computers, there is support for Spyder, PyCharm, and PsychoPy.&lt;br /&gt;
&lt;br /&gt;
=== For SR-Research Eyelink users ===&lt;br /&gt;
Pylink is installed on our lab computers.&lt;br /&gt;
&lt;br /&gt;
=== For SMI RED 500 and SMI HiSpeed Tower users ===&lt;br /&gt;
Include this file into your project: [https://gitlab.socsci.ru.nl/h.voogd/iviewxudp iViewXudp]. This should work on both 64-bit and 32-bit Python 3 versions.&lt;br /&gt;
&lt;br /&gt;
=== For Tobii Studio ===&lt;br /&gt;
Check this link to connect to Tobii Studio:&lt;br /&gt;
https://gitlab.socsci.ru.nl/h.voogd/tobiiclearviewtriggerapipython3&lt;br /&gt;
The tobii-research package is installed on our lab computers.&lt;br /&gt;
&lt;br /&gt;
=== For Tobii Pro Lab ===&lt;br /&gt;
The tobii-research package is installed on our lab computers. Titta (https://github.com/marcus-nystrom/Titta) does not work on a two-computer setup.&lt;br /&gt;
&lt;br /&gt;
==Usage==&lt;br /&gt;
===VirtualEnv===&lt;br /&gt;
Some of the packages installed in the steps above make it possible to use virtualenvs. A virtual environment is a Python environment in which the interpreter, libraries, and scripts are isolated from those installed in other virtual environments and (by default) from any libraries installed in a “system” Python, i.e., one installed as part of your operating system.&lt;br /&gt;
&lt;br /&gt;
If you want to use virtualenvs on your own computer, then in ''System| Advanced system settings | Environment variables'', make a new system variable with name '''WORKON_HOME''' and value ''C:\Users\Public\Envs\''.&lt;br /&gt;
Also make a new system variable with name '''PROJECT_HOME''' and value ''C:\Users\Public\Projects''. Your virtualenvs will now be stored in ''C:\Users\Public\Envs\'' and your projects in ''C:\Users\Public\Projects''. These are also the places where virtualenvs and projects are stored on the lab computers.&lt;br /&gt;
&lt;br /&gt;
Open a command window with administrator rights and type:&amp;lt;br&amp;gt;&lt;br /&gt;
'''workon'''  to see a list of existing virtualenvs.&amp;lt;br&amp;gt;&lt;br /&gt;
'''workon &amp;lt;virtualenvname&amp;gt;''', where &amp;lt;virtualenvname&amp;gt; is the name of the virtualenv you want to use.&amp;lt;br&amp;gt;&lt;br /&gt;
'''mkvirtualenv &amp;lt;virtualenvname&amp;gt;''' to create a new and empty virtualenv.&amp;lt;br&amp;gt;&lt;br /&gt;
'''mkvirtualenv -p=310 &amp;lt;virtualenvname&amp;gt;''' to create a new and empty virtualenv that uses the installed Python3.10.11.&amp;lt;br&amp;gt;&lt;br /&gt;
'''mkvirtualenv -p=38 &amp;lt;virtualenvname&amp;gt;''' to create a new and empty virtualenv that uses the installed Python3.8.10.&amp;lt;br&amp;gt;&lt;br /&gt;
'''mkvirtualenv -p=310 --system-site-packages &amp;lt;virtualenvname&amp;gt;''' to create a new virtualenv that uses the installed Python3.10 and its site-packages.&amp;lt;br&amp;gt;&lt;br /&gt;
'''rmvirtualenv &amp;lt;virtualenvname&amp;gt;''' to remove the virtualenv with name &amp;lt;virtualenvname&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
'''deactivate''' to return to the defaults.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need a package that is not installed on our lab computers, contact TSG, so that we can decide whether to add it to our standard installation or to install it in a separate virtualenv. Do '''not''' use ''pip install'' to install anything in an existing virtualenv unless it is your own: this might interfere with existing packages and mess up other people's projects. Instead, make your own virtualenv and install the package there (use '''mkvirtualenv''' to create it, '''workon''' to activate it, then '''pip''' to install packages into it). Also, make a backup of your virtualenv, because newly created virtualenvs are gone when the lab image is updated.&lt;br /&gt;
&lt;br /&gt;
=== Spyder ===&lt;br /&gt;
Spyder (Scientific Python Development Environment) is an IDE for Python that can be run from a command prompt. To use Spyder in the default Python environment, just type '''Spyder''' at the command prompt. To use Spyder from a virtualenv, type '''workon &amp;lt;name of the virtualenv&amp;gt;''' and then '''Spyder3'''. If you have created your own virtualenv, make sure that Spyder is installed in it.&lt;br /&gt;
&lt;br /&gt;
=== PyCharm ===&lt;br /&gt;
PyCharm, a Python IDE, is installed on our lab computers. In the lower-right corner, it displays its current Python environment. By clicking on that name, you can change the interpreter and choose from the existing virtualenvs that PyCharm knows, or add your own virtualenv. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:PycharmInterpreters.png|thumb|none|PycharmInterpreters]]&lt;br /&gt;
&lt;br /&gt;
=== Batch files ===&lt;br /&gt;
If you are working from a virtualenv other than the default, and you don't want to open a command window and type '''workon &amp;lt;virtualenv&amp;gt;''' and '''python &amp;lt;myscript.py&amp;gt;''' every time, you might want to make a batch file. Create a text file and type:&amp;lt;br&amp;gt;&lt;br /&gt;
'''call workon &amp;lt;virtualenv&amp;gt;'''&amp;lt;br&amp;gt;&lt;br /&gt;
'''python &amp;lt;myscript.py&amp;gt;'''&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Save the file into the folder where your script is, but change the extension from ''.txt'' to ''.bat'', for example, save the file as ''startmyscript.bat''.&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6143</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6143"/>
		<updated>2026-01-14T14:17:01Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Python */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
Note that the lab computer displays are typically set to 1920×1080 at 120 Hz, which we found to be sufficient for most applications; higher settings are possible. Later on this page we explain how to encode audio and video. We start with playing video, both with and without audio.&lt;br /&gt;
&lt;br /&gt;
=== Python (PsychoPy 2024.2.4) ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
import gc&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
movie = visual.MovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart= False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
movie.play()&lt;br /&gt;
movie_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while not movie.isFinished:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    movie.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
movie.stop()     # stop playback&lt;br /&gt;
del movie&lt;br /&gt;
gc.collect()&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with its audio track split off and played separately:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
import gc&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
movie = visual.MovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=1.0,  # volume range is 0.0 to 1.0 (unused here because noAudio=True)&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
movie.play()&lt;br /&gt;
movie_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while not movie.isFinished:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    movie.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
movie.stop()     # stop playback&lt;br /&gt;
del movie&lt;br /&gt;
gc.collect()&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Example demonstrating how to check whether the video and audio encoding are correct:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import subprocess&lt;br /&gt;
import json&lt;br /&gt;
&lt;br /&gt;
file_path = &amp;quot;tick_rhythm_combined_1min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def check_video_file(file_path):&lt;br /&gt;
    try:&lt;br /&gt;
        # Run ffprobe to get file metadata in JSON format&lt;br /&gt;
        result = subprocess.run(&lt;br /&gt;
            [&lt;br /&gt;
                &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_streams&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_format&amp;quot;,&lt;br /&gt;
                &amp;quot;-print_format&amp;quot;, &amp;quot;json&amp;quot;,&lt;br /&gt;
                file_path&lt;br /&gt;
            ],&lt;br /&gt;
            stdout=subprocess.PIPE,&lt;br /&gt;
            stderr=subprocess.PIPE,&lt;br /&gt;
            text=True&lt;br /&gt;
        )&lt;br /&gt;
        metadata = json.loads(result.stdout)&lt;br /&gt;
    except Exception as e:&lt;br /&gt;
        print(f&amp;quot;Error running ffprobe: {e}&amp;quot;)&lt;br /&gt;
        return&lt;br /&gt;
    &lt;br /&gt;
    # Check for video stream&lt;br /&gt;
    video_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'video'), None)&lt;br /&gt;
    if video_stream:&lt;br /&gt;
        # Check video codec&lt;br /&gt;
        video_codec = video_stream.get('codec_name')&lt;br /&gt;
        if video_codec == 'h264':&lt;br /&gt;
            print(&amp;quot;Video codec: H.264&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video codec is NOT H.264 (Found: {video_codec})&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Extract and report frame rate&lt;br /&gt;
        if 'r_frame_rate' in video_stream:&lt;br /&gt;
            raw_frame_rate = video_stream['r_frame_rate']&lt;br /&gt;
            num, den = map(int, raw_frame_rate.split('/'))  # Parse a string like &amp;quot;30/1&amp;quot; without eval&lt;br /&gt;
            calculated_frame_rate = num / den if den else 0.0&lt;br /&gt;
            print(f&amp;quot;Frame rate: {calculated_frame_rate:.2f} FPS (raw: {raw_frame_rate})&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine raw frame rate from metadata.&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Check for constant frame rate&lt;br /&gt;
        if video_stream.get('avg_frame_rate'):&lt;br /&gt;
            num, den = map(int, video_stream['avg_frame_rate'].split('/'))&lt;br /&gt;
            avg_frame_rate = num / den if den else 0.0&lt;br /&gt;
            if abs(avg_frame_rate - calculated_frame_rate) &amp;lt; 0.01:&lt;br /&gt;
                print(&amp;quot;Frame rate: Constant&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(f&amp;quot;ERROR: Frame rate is NOT constant (avg_frame_rate: {avg_frame_rate:.2f} FPS)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine average frame rate consistency.&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check for frame drops&lt;br /&gt;
        try:&lt;br /&gt;
            frame_info_result = subprocess.run(&lt;br /&gt;
                [&lt;br /&gt;
                    &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                    &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                    &amp;quot;-select_streams&amp;quot;, &amp;quot;v:0&amp;quot;,&lt;br /&gt;
                    &amp;quot;-show_entries&amp;quot;, &amp;quot;frame=pts_time&amp;quot;,  # pkt_pts_time was removed in newer FFmpeg versions&lt;br /&gt;
                    &amp;quot;-of&amp;quot;, &amp;quot;csv=p=0&amp;quot;,&lt;br /&gt;
                    file_path&lt;br /&gt;
                ],&lt;br /&gt;
                stdout=subprocess.PIPE,&lt;br /&gt;
                stderr=subprocess.PIPE,&lt;br /&gt;
                text=True&lt;br /&gt;
            )&lt;br /&gt;
            # Filter out empty or invalid lines&lt;br /&gt;
            frame_times = [&lt;br /&gt;
                float(line.strip()) for line in frame_info_result.stdout.splitlines()&lt;br /&gt;
                if line.strip()  # Exclude empty lines&lt;br /&gt;
            ]&lt;br /&gt;
            expected_interval = 1.0 / calculated_frame_rate  # Expected time between frames&lt;br /&gt;
            frame_drops = [&lt;br /&gt;
                i for i, (t1, t2) in enumerate(zip(frame_times, frame_times[1:]))&lt;br /&gt;
                if abs(t2 - t1 - expected_interval) &amp;gt; 0.01  # Tolerance for irregularity&lt;br /&gt;
            ]&lt;br /&gt;
            if frame_drops:&lt;br /&gt;
                print(f&amp;quot;ERROR: Detected frame drops at frames: {frame_drops}&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(&amp;quot;No frame drops detected.&amp;quot;)&lt;br /&gt;
        except Exception as e:&lt;br /&gt;
            print(f&amp;quot;Error analyzing frames for drops: {e}&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No video stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check for audio stream&lt;br /&gt;
    audio_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'audio'), None)&lt;br /&gt;
    if audio_stream:&lt;br /&gt;
        # Check audio codec&lt;br /&gt;
        audio_codec = audio_stream.get('codec_name')&lt;br /&gt;
        if audio_codec == 'pcm_s16le':&lt;br /&gt;
            print(&amp;quot;Audio codec: WAV (PCM)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio codec is NOT WAV (PCM) (Found: {audio_codec})&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check sample rate&lt;br /&gt;
        sample_rate = audio_stream.get('sample_rate')&lt;br /&gt;
        if sample_rate == &amp;quot;48000&amp;quot;:&lt;br /&gt;
            print(&amp;quot;Audio sample rate: 48 kHz&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio sample rate is NOT 48 kHz (Found: {sample_rate} Hz)&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No audio stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check synchronization&lt;br /&gt;
    if video_stream and audio_stream:&lt;br /&gt;
        video_start_pts = float(video_stream.get('start_time', 0))&lt;br /&gt;
        audio_start_pts = float(audio_stream.get('start_time', 0))&lt;br /&gt;
        if abs(video_start_pts - audio_start_pts) &amp;lt; 0.01:  # Tolerance for synchronization&lt;br /&gt;
            print(&amp;quot;Video and audio are synchronized.&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video and audio are NOT synchronized. Start difference: {abs(video_start_pts - audio_start_pts):.3f} seconds&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: Could not determine synchronization (missing video or audio streams).&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Example usage&lt;br /&gt;
if __name__ == &amp;quot;__main__&amp;quot;:&lt;br /&gt;
    check_video_file(file_path)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to split the audio track off from a video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', '-c:v', 'copy', output_video])  # copy the video stream without re-encoding&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '48000', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
- Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
- Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
- Stabilize the camera, and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
- Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
The TSG recommends the [https://www.elgato.com/ww/en/p/facecam-mk2 Facecam] for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 [[media:Openh264-1.8.0-win64_.zip | codec(libx264)]]) &lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
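&lt;br /&gt;
The table above maps onto FFmpeg options roughly as follows (a sketch; the file names are placeholders and the exact bitrate is up to you):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -r 60 -s 1920x1080 -b:v 15M -vsync cfr output.mp4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The FFmpeg section below gives a fuller command that also handles audio and timestamps.&lt;br /&gt;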
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for experiments with critical timing.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record video with the Facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and finalize the video file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|lossless or high-quality codecs&lt;br /&gt;
|-&lt;br /&gt;
!PCM (WAV)&lt;br /&gt;
|uncompressed&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Prepare your audio file for low-latency, high-accuracy playback with FFmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 Settings to check&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sound → Playback → right-click → Properties → Advanced Tab:&lt;br /&gt;
&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit, Studio Quality.&lt;br /&gt;
&lt;br /&gt;
   - Disable sound enhancements.&lt;br /&gt;
&lt;br /&gt;
   - In the same properties window, go to Enhancements tab → Disable all enhancements.&lt;br /&gt;
&lt;br /&gt;
   - Exclusive Mode:&lt;br /&gt;
&lt;br /&gt;
   - In the same Advanced tab.&lt;br /&gt;
&lt;br /&gt;
   - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
   - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check your OS audio settings and play an audio file:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
{{See also|FFmpeg}}&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mp4&lt;br /&gt;
	-c:v libx264: Encode video using H.264.&lt;br /&gt;
	-preset slow: Optimize for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces constant frame rate.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio in uncompressed WAV format.&lt;br /&gt;
	-ar 48000: Sets audio sample rate to 48.0 kHz.&lt;br /&gt;
	-fflags +genpts: Ensures accurate timestamps.&lt;br /&gt;
	-async 1: Synchronizes audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Additional tips===&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
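&lt;br /&gt;
As an example of the low-latency point above, a real-time H.264 encode could look like this (a sketch; the input and output names are placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset ultrafast -tune zerolatency -c:a aac output.mp4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;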
&lt;br /&gt;
==Editing==&lt;br /&gt;
&lt;br /&gt;
For editing video files, you can use '''Shotcut''', a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
Another option is '''DaVinci Resolve''', a free, professional-grade program for editing and converting video files, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6142</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6142"/>
		<updated>2026-01-14T14:12:32Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Video playback */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
Note that the lab computer displays are typically set to 1920×1080 at 120 Hz, which we found to be sufficient for most applications; higher settings are possible. Later on this page we explain how to encode audio and video. We start with playing video, both with and without audio.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
import gc&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
movie = visual.MovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart= False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
movie.play()&lt;br /&gt;
movie_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while not movie.isFinished:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    movie.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
movie.stop()     # stop playback&lt;br /&gt;
del movie&lt;br /&gt;
gc.collect()&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with its audio track split off and played separately:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
import gc&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
movie = visual.MovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=1.0,  # volume range is 0.0 to 1.0 (unused here because noAudio=True)&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
movie.play()&lt;br /&gt;
movie_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while not movie.isFinished:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    movie.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
movie.stop()     # stop playback&lt;br /&gt;
del movie&lt;br /&gt;
gc.collect()&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Example demonstrating how to check whether the video and audio encoding are correct:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import subprocess&lt;br /&gt;
import json&lt;br /&gt;
&lt;br /&gt;
file_path = &amp;quot;tick_rhythm_combined_1min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def check_video_file(file_path):&lt;br /&gt;
    try:&lt;br /&gt;
        # Run ffprobe to get file metadata in JSON format&lt;br /&gt;
        result = subprocess.run(&lt;br /&gt;
            [&lt;br /&gt;
                &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_streams&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_format&amp;quot;,&lt;br /&gt;
                &amp;quot;-print_format&amp;quot;, &amp;quot;json&amp;quot;,&lt;br /&gt;
                file_path&lt;br /&gt;
            ],&lt;br /&gt;
            stdout=subprocess.PIPE,&lt;br /&gt;
            stderr=subprocess.PIPE,&lt;br /&gt;
            text=True&lt;br /&gt;
        )&lt;br /&gt;
        metadata = json.loads(result.stdout)&lt;br /&gt;
    except Exception as e:&lt;br /&gt;
        print(f&amp;quot;Error running ffprobe: {e}&amp;quot;)&lt;br /&gt;
        return&lt;br /&gt;
    &lt;br /&gt;
    # Check for video stream&lt;br /&gt;
    video_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'video'), None)&lt;br /&gt;
    if video_stream:&lt;br /&gt;
        # Check video codec&lt;br /&gt;
        video_codec = video_stream.get('codec_name')&lt;br /&gt;
        if video_codec == 'h264':&lt;br /&gt;
            print(&amp;quot;Video codec: H.264&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video codec is NOT H.264 (Found: {video_codec})&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Extract and report frame rate&lt;br /&gt;
        if 'r_frame_rate' in video_stream:&lt;br /&gt;
            raw_frame_rate = video_stream['r_frame_rate']&lt;br /&gt;
            num, den = raw_frame_rate.split('/')  # Parse a ratio like &amp;quot;30/1&amp;quot; without eval&lt;br /&gt;
            calculated_frame_rate = float(num) / float(den)&lt;br /&gt;
            print(f&amp;quot;Frame rate: {calculated_frame_rate:.2f} FPS (raw: {raw_frame_rate})&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine raw frame rate from metadata.&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Check for constant frame rate&lt;br /&gt;
        if video_stream.get('avg_frame_rate'):&lt;br /&gt;
            num, den = video_stream['avg_frame_rate'].split('/')&lt;br /&gt;
            avg_frame_rate = float(num) / float(den)&lt;br /&gt;
            if abs(avg_frame_rate - calculated_frame_rate) &amp;lt; 0.01:&lt;br /&gt;
                print(&amp;quot;Frame rate: Constant&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(f&amp;quot;ERROR: Frame rate is NOT constant (avg_frame_rate: {avg_frame_rate:.2f} FPS)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine average frame rate consistency.&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check for frame drops&lt;br /&gt;
        try:&lt;br /&gt;
            frame_info_result = subprocess.run(&lt;br /&gt;
                [&lt;br /&gt;
                    &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                    &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                    &amp;quot;-select_streams&amp;quot;, &amp;quot;v:0&amp;quot;,&lt;br /&gt;
                    &amp;quot;-show_entries&amp;quot;, &amp;quot;frame=pkt_pts_time&amp;quot;,&lt;br /&gt;
                    &amp;quot;-of&amp;quot;, &amp;quot;csv=p=0&amp;quot;,&lt;br /&gt;
                    file_path&lt;br /&gt;
                ],&lt;br /&gt;
                stdout=subprocess.PIPE,&lt;br /&gt;
                stderr=subprocess.PIPE,&lt;br /&gt;
                text=True&lt;br /&gt;
            )&lt;br /&gt;
            # Filter out empty or invalid lines&lt;br /&gt;
            frame_times = [&lt;br /&gt;
                float(line.strip()) for line in frame_info_result.stdout.splitlines()&lt;br /&gt;
                if line.strip()  # Exclude empty lines&lt;br /&gt;
            ]&lt;br /&gt;
            expected_interval = 1.0 / calculated_frame_rate  # Expected time between frames&lt;br /&gt;
            frame_drops = [&lt;br /&gt;
                i for i, (t1, t2) in enumerate(zip(frame_times, frame_times[1:]))&lt;br /&gt;
                if abs(t2 - t1 - expected_interval) &amp;gt; 0.01  # Tolerance for irregularity&lt;br /&gt;
            ]&lt;br /&gt;
            if frame_drops:&lt;br /&gt;
                print(f&amp;quot;ERROR: Detected frame drops at frames: {frame_drops}&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(&amp;quot;No frame drops detected.&amp;quot;)&lt;br /&gt;
        except Exception as e:&lt;br /&gt;
            print(f&amp;quot;Error analyzing frames for drops: {e}&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No video stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check for audio stream&lt;br /&gt;
    audio_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'audio'), None)&lt;br /&gt;
    if audio_stream:&lt;br /&gt;
        # Check audio codec&lt;br /&gt;
        audio_codec = audio_stream.get('codec_name')&lt;br /&gt;
        if audio_codec == 'pcm_s16le':&lt;br /&gt;
            print(&amp;quot;Audio codec: WAV (PCM)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio codec is NOT WAV (PCM) (Found: {audio_codec})&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check sample rate (48 kHz is the recommended setting)&lt;br /&gt;
        sample_rate = audio_stream.get('sample_rate')&lt;br /&gt;
        if sample_rate == &amp;quot;48000&amp;quot;:&lt;br /&gt;
            print(&amp;quot;Audio sample rate: 48 kHz&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio sample rate is NOT 48 kHz (Found: {sample_rate} Hz)&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No audio stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check synchronization&lt;br /&gt;
    if video_stream and audio_stream:&lt;br /&gt;
        video_start_pts = float(video_stream.get('start_time', 0))&lt;br /&gt;
        audio_start_pts = float(audio_stream.get('start_time', 0))&lt;br /&gt;
        if abs(video_start_pts - audio_start_pts) &amp;lt; 0.01:  # Tolerance for synchronization&lt;br /&gt;
            print(&amp;quot;Video and audio are synchronized.&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video and audio are NOT synchronized. Start difference: {abs(video_start_pts - audio_start_pts):.3f} seconds&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: Could not determine synchronization (missing video or audio streams).&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Example usage&lt;br /&gt;
if __name__ == &amp;quot;__main__&amp;quot;:&lt;br /&gt;
    check_video_file(file_path)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to disconnect audio from video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# -c:v copy keeps the original video encoding; -y overwrites existing output files&lt;br /&gt;
subprocess.run(['ffmpeg', '-y', '-i', input_file, '-an', '-c:v', 'copy', output_video])&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-y', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '48000', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
The TSG recommends the [https://www.elgato.com/ww/en/p/facecam-mk2 Facecam] for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 [[media:Openh264-1.8.0-win64_.zip | codec(libx264)]]) &lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|Enforce a constant frame rate (e.g., FFmpeg's -vsync cfr)&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
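To budget disk space for the recommended bitrate, recording size follows directly from bitrate and duration; a quick helper (plain arithmetic, decimal gigabytes):

```python
def recording_size_gb(bitrate_mbps: float, minutes: float) -> float:
    # megabits per second -> total bytes -> decimal gigabytes
    return bitrate_mbps * 1e6 * minutes * 60 / 8 / 1e9

# A 10-minute Full HD recording at 15 Mbps:
print(recording_size_gb(15, 10))  # 1.125 GB
```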
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
# Set the Windows multimedia timer resolution to 1 ms (improves sleep accuracy)&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height, freq)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp + '_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
&lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
&lt;br /&gt;
            video_writer.write(frame)  # Write the frame to the file system&lt;br /&gt;
&lt;br /&gt;
            # Show a quarter-size preview on screen&lt;br /&gt;
            preview = cv2.resize(frame, (frame_width // 4, frame_height // 4))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, preview)&lt;br /&gt;
&lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and finalize the video file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|PCM (WAV), uncompressed (or another lossless codec)&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Set your audio for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
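For plain WAV stimuli, these properties can also be verified from Python with the standard-library wave module; a minimal sketch (the tone generator is only included to produce a test file, and the file names are placeholders):

```python
import math
import struct
import wave

def write_tone_wav(path, rate=48000, seconds=0.1):
    # Write a short stereo, 16-bit, 440 Hz test tone.
    with wave.open(path, 'wb') as w:
        w.setnchannels(2)
        w.setsampwidth(2)        # 2 bytes = 16-bit samples
        w.setframerate(rate)
        for i in range(int(rate * seconds)):
            v = int(30000 * math.sin(2 * math.pi * 440 * i / rate))
            w.writeframes(struct.pack('<hh', v, v))

def wav_matches_recommendation(path):
    # True when the file is 48 kHz, stereo, 16-bit PCM.
    with wave.open(path, 'rb') as w:
        return (w.getframerate(), w.getnchannels(), w.getsampwidth()) == (48000, 2, 2)

write_tone_wav('test_tone.wav')
print(wav_matches_recommendation('test_tone.wav'))  # True
```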
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your device → Properties:&lt;br /&gt;
&lt;br /&gt;
   Advanced tab:&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit, Studio Quality.&lt;br /&gt;
   - Exclusive Mode: Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
   - Exclusive Mode: Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&lt;br /&gt;
   Enhancements tab:&lt;br /&gt;
   - Disable all sound enhancements.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check your OS settings and audio file, and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd  # needed below for the OS-level device query&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up (no window was opened, so only stop the sound and quit)&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
wave_file.stop()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
{{See also|FFmpeg}}&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mp4&lt;br /&gt;
	-c:v libx264: Encode video using H.264.&lt;br /&gt;
	-preset slow: Optimize for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces a constant frame rate.&lt;br /&gt;
	-g 30: Inserts a keyframe every 30 frames (fixed GOP size, helps precise seeking).&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio in uncompressed WAV format.&lt;br /&gt;
	-ar 48000: Sets audio sample rate to 48.0 kHz.&lt;br /&gt;
	-fflags +genpts: Ensures accurate timestamps.&lt;br /&gt;
	-async 1: Synchronizes audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Additional tips===&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
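As a sketch of the low-latency tip above, such a re-encode command could be assembled like this (ffmpeg being on PATH and the file names are assumptions; the command is only built here, not executed):

```python
def low_latency_cmd(src: str, dst: str) -> list:
    # H.264 with zerolatency tuning disables B-frames and lookahead,
    # trading some compression efficiency for minimal encoder delay.
    return [
        'ffmpeg', '-y', '-i', src,
        '-c:v', 'libx264', '-tune', 'zerolatency', '-preset', 'veryfast',
        '-c:a', 'pcm_s16le', '-ar', '48000',   # keep the audio settings from above
        dst,
    ]

print(' '.join(low_latency_cmd('input.mp4', 'output_lowlat.mp4')))
```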
&lt;br /&gt;
==Editing==&lt;br /&gt;
&lt;br /&gt;
You can use '''Shotcut''', a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
Another option is '''DaVinci Resolve''', a free, professional-grade program for editing and converting video files, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6052</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6052"/>
		<updated>2025-04-29T10:47:46Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Python */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
Note that the Lab Computer displays are typically set to 1920×1080 at 120 Hz, which we have found sufficient for most applications; higher settings are possible. Later on this page we explain how to encode audio and video. We start with playing video, both with and without audio. &lt;br /&gt;
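For time-critical stimuli it helps to think in whole refresh frames: at 120 Hz each frame lasts 1/120 s (about 8.3 ms). A small helper for converting a desired duration into frames (plain arithmetic, not part of the PsychoPy API):

```python
def frames_for(duration_s: float, refresh_hz: int = 120) -> int:
    # Nearest whole number of refresh frames for the requested duration.
    return round(duration_s * refresh_hz)

print(frames_for(0.5))                  # 60 frames at 120 Hz
print(frames_for(0.25, refresh_hz=60))  # 15 frames at 60 Hz
```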
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart=False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with audio disconnected:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Example demonstrating how to check whether the video and audio encoding are correct:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import subprocess&lt;br /&gt;
import json&lt;br /&gt;
&lt;br /&gt;
file_path = &amp;quot;tick_rhythm_combined_1min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def check_video_file(file_path):&lt;br /&gt;
    try:&lt;br /&gt;
        # Run ffprobe to get file metadata in JSON format&lt;br /&gt;
        result = subprocess.run(&lt;br /&gt;
            [&lt;br /&gt;
                &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_streams&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_format&amp;quot;,&lt;br /&gt;
                &amp;quot;-print_format&amp;quot;, &amp;quot;json&amp;quot;,&lt;br /&gt;
                file_path&lt;br /&gt;
            ],&lt;br /&gt;
            stdout=subprocess.PIPE,&lt;br /&gt;
            stderr=subprocess.PIPE,&lt;br /&gt;
            text=True&lt;br /&gt;
        )&lt;br /&gt;
        metadata = json.loads(result.stdout)&lt;br /&gt;
    except Exception as e:&lt;br /&gt;
        print(f&amp;quot;Error running ffprobe: {e}&amp;quot;)&lt;br /&gt;
        return&lt;br /&gt;
    &lt;br /&gt;
    # Check for video stream&lt;br /&gt;
    video_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'video'), None)&lt;br /&gt;
    if video_stream:&lt;br /&gt;
        # Check video codec&lt;br /&gt;
        video_codec = video_stream.get('codec_name')&lt;br /&gt;
        if video_codec == 'h264':&lt;br /&gt;
            print(&amp;quot;Video codec: H.264&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video codec is NOT H.264 (Found: {video_codec})&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Extract and report frame rate&lt;br /&gt;
        if 'r_frame_rate' in video_stream:&lt;br /&gt;
            raw_frame_rate = video_stream['r_frame_rate']&lt;br /&gt;
            num, den = raw_frame_rate.split('/')  # Parse a ratio like &amp;quot;30/1&amp;quot; without eval&lt;br /&gt;
            calculated_frame_rate = float(num) / float(den)&lt;br /&gt;
            print(f&amp;quot;Frame rate: {calculated_frame_rate:.2f} FPS (raw: {raw_frame_rate})&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine raw frame rate from metadata.&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Check for constant frame rate&lt;br /&gt;
        if video_stream.get('avg_frame_rate'):&lt;br /&gt;
            num, den = video_stream['avg_frame_rate'].split('/')&lt;br /&gt;
            avg_frame_rate = float(num) / float(den)&lt;br /&gt;
            if abs(avg_frame_rate - calculated_frame_rate) &amp;lt; 0.01:&lt;br /&gt;
                print(&amp;quot;Frame rate: Constant&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(f&amp;quot;ERROR: Frame rate is NOT constant (avg_frame_rate: {avg_frame_rate:.2f} FPS)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine average frame rate consistency.&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check for frame drops&lt;br /&gt;
        try:&lt;br /&gt;
            frame_info_result = subprocess.run(&lt;br /&gt;
                [&lt;br /&gt;
                    &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                    &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                    &amp;quot;-select_streams&amp;quot;, &amp;quot;v:0&amp;quot;,&lt;br /&gt;
                    &amp;quot;-show_entries&amp;quot;, &amp;quot;frame=pkt_pts_time&amp;quot;,&lt;br /&gt;
                    &amp;quot;-of&amp;quot;, &amp;quot;csv=p=0&amp;quot;,&lt;br /&gt;
                    file_path&lt;br /&gt;
                ],&lt;br /&gt;
                stdout=subprocess.PIPE,&lt;br /&gt;
                stderr=subprocess.PIPE,&lt;br /&gt;
                text=True&lt;br /&gt;
            )&lt;br /&gt;
            # Filter out empty or invalid lines&lt;br /&gt;
            frame_times = [&lt;br /&gt;
                float(line.strip()) for line in frame_info_result.stdout.splitlines()&lt;br /&gt;
                if line.strip()  # Exclude empty lines&lt;br /&gt;
            ]&lt;br /&gt;
            expected_interval = 1.0 / calculated_frame_rate  # Expected time between frames&lt;br /&gt;
            frame_drops = [&lt;br /&gt;
                i for i, (t1, t2) in enumerate(zip(frame_times, frame_times[1:]))&lt;br /&gt;
                if abs(t2 - t1 - expected_interval) &amp;gt; 0.01  # Tolerance for irregularity&lt;br /&gt;
            ]&lt;br /&gt;
            if frame_drops:&lt;br /&gt;
                print(f&amp;quot;ERROR: Detected frame drops at frames: {frame_drops}&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(&amp;quot;No frame drops detected.&amp;quot;)&lt;br /&gt;
        except Exception as e:&lt;br /&gt;
            print(f&amp;quot;Error analyzing frames for drops: {e}&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No video stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check for audio stream&lt;br /&gt;
    audio_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'audio'), None)&lt;br /&gt;
    if audio_stream:&lt;br /&gt;
        # Check audio codec&lt;br /&gt;
        audio_codec = audio_stream.get('codec_name')&lt;br /&gt;
        if audio_codec == 'pcm_s16le':&lt;br /&gt;
            print(&amp;quot;Audio codec: WAV (PCM)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio codec is NOT WAV (PCM) (Found: {audio_codec})&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check sample rate (48 kHz is the recommended setting)&lt;br /&gt;
        sample_rate = audio_stream.get('sample_rate')&lt;br /&gt;
        if sample_rate == &amp;quot;48000&amp;quot;:&lt;br /&gt;
            print(&amp;quot;Audio sample rate: 48 kHz&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio sample rate is NOT 48 kHz (Found: {sample_rate} Hz)&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No audio stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check synchronization&lt;br /&gt;
    if video_stream and audio_stream:&lt;br /&gt;
        video_start_pts = float(video_stream.get('start_time', 0))&lt;br /&gt;
        audio_start_pts = float(audio_stream.get('start_time', 0))&lt;br /&gt;
        if abs(video_start_pts - audio_start_pts) &amp;lt; 0.01:  # Tolerance for synchronization&lt;br /&gt;
            print(&amp;quot;Video and audio are synchronized.&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video and audio are NOT synchronized. Start difference: {abs(video_start_pts - audio_start_pts):.3f} seconds&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: Could not determine synchronization (missing video or audio streams).&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Example usage&lt;br /&gt;
if __name__ == &amp;quot;__main__&amp;quot;:&lt;br /&gt;
    check_video_file(file_path)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to disconnect audio from video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', '-c:v', 'copy', output_video])  # strip audio; copy video without re-encoding&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '48000', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
The TSG recommends the [https://www.elgato.com/ww/en/p/facecam-mk2 Facecam] for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 [[media:Openh264-1.8.0-win64_.zip | codec(libx264)]]) &lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for timing-critical experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), set the &amp;quot;Graphics performance preference&amp;quot; to &amp;quot;High performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height, freq)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp + '_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
&lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
&lt;br /&gt;
            video_writer.write(frame)  # Write the frame to the file system&lt;br /&gt;
&lt;br /&gt;
            # Show a downscaled preview to keep the rendering load low&lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4), int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)&lt;br /&gt;
&lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and close the output file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|lossless or high-quality codecs&lt;br /&gt;
|-&lt;br /&gt;
!PCM (WAV)&lt;br /&gt;
|uncompressed&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Prepare your audio for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
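You can verify that a converted file actually matches these settings using only Python's standard-library wave module. A minimal sketch (the file name is a placeholder; the demo clip is generated on the spot rather than taken from a real recording):&lt;br /&gt;

```python
import wave

def check_wav_settings(path, rate=48000, channels=2, sampwidth=2):
    """Return a list of problems; an empty list means the file matches."""
    problems = []
    with wave.open(path, 'rb') as wf:
        if wf.getframerate() != rate:
            problems.append(f"sample rate is {wf.getframerate()} Hz, expected {rate}")
        if wf.getnchannels() != channels:
            problems.append(f"{wf.getnchannels()} channel(s), expected {channels}")
        if wf.getsampwidth() != sampwidth:
            problems.append(f"{wf.getsampwidth() * 8}-bit samples, expected {sampwidth * 8}-bit")
    return problems

# Demo: write 0.1 s of silent 48 kHz stereo 16-bit audio, then check it.
with wave.open('demo_48k.wav', 'wb') as wf:
    wf.setnchannels(2)
    wf.setsampwidth(2)
    wf.setframerate(48000)
    wf.writeframes(b'\x00\x00' * 2 * 4800)  # 4800 frames of stereo silence

print(check_wav_settings('demo_48k.wav'))  # → []
```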
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your device → Properties → Advanced tab:&lt;br /&gt;
&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit, Studio Quality.&lt;br /&gt;
&lt;br /&gt;
   - In the Enhancements tab, disable all enhancements.&lt;br /&gt;
&lt;br /&gt;
   - In the Advanced tab, under Exclusive Mode:&lt;br /&gt;
&lt;br /&gt;
     - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
     - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check your OS settings and audio file, and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    core.wait(0.01)  # avoid busy-waiting while the sound plays&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
wave_file.stop()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mov&lt;br /&gt;
	-c:v libx264: Encode video using H.264.&lt;br /&gt;
	-preset slow: Optimize for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces a constant frame rate.&lt;br /&gt;
	-g 30: Sets the keyframe (GOP) interval to 30 frames.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio as uncompressed PCM (hence the .mov container; MP4 does not support PCM audio).&lt;br /&gt;
	-ar 48000: Sets the audio sample rate to 48 kHz.&lt;br /&gt;
	-fflags +genpts: Generates accurate presentation timestamps.&lt;br /&gt;
	-async 1: Synchronizes the audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Additional Tips===&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
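The low-latency tip above can be sketched as a reusable command builder. This is an illustration rather than a TSG-provided tool; the file names are placeholders, and the resulting list can be passed to subprocess.run as in the earlier examples:&lt;br /&gt;

```python
def build_lowlatency_cmd(input_file, output_file, fps=60):
    """Return an ffmpeg argument list tuned for low-latency H.264 encoding."""
    return [
        'ffmpeg',
        '-i', input_file,
        '-c:v', 'libx264',
        '-tune', 'zerolatency',   # disable look-ahead/B-frames for minimal delay
        '-preset', 'veryfast',    # favour encoding speed over compression
        '-r', str(fps),           # keep the original frame rate (avoid resampling)
        '-c:a', 'copy',           # keep the original audio (avoid resampling)
        output_file,
    ]

cmd = build_lowlatency_cmd('input.mp4', 'output_lowlatency.mp4')
print(' '.join(cmd))
```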
&lt;br /&gt;
==Editing==&lt;br /&gt;
&lt;br /&gt;
For basic editing you can use '''Shotcut''', a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
Another option is '''DaVinci Resolve''', a free, professional-grade program for editing and converting video files, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6051</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6051"/>
		<updated>2025-04-29T09:49:23Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Video encoding */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
Note that the Lab Computer displays are typically set to 1920×1080 at 120 Hz, which we found sufficient for most applications; higher settings are possible. Later on this page we explain how to encode audio and video. We start with playing video, both with and without audio. &lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart= False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with audio disconnected:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Example demonstrating how to check whether video and audio encoding are correct:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import subprocess&lt;br /&gt;
import json&lt;br /&gt;
&lt;br /&gt;
file_path = &amp;quot;tick_rhythm_combined_1min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def check_video_file(file_path):&lt;br /&gt;
    try:&lt;br /&gt;
        # Run ffprobe to get file metadata in JSON format&lt;br /&gt;
        result = subprocess.run(&lt;br /&gt;
            [&lt;br /&gt;
                &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_streams&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_format&amp;quot;,&lt;br /&gt;
                &amp;quot;-print_format&amp;quot;, &amp;quot;json&amp;quot;,&lt;br /&gt;
                file_path&lt;br /&gt;
            ],&lt;br /&gt;
            stdout=subprocess.PIPE,&lt;br /&gt;
            stderr=subprocess.PIPE,&lt;br /&gt;
            text=True&lt;br /&gt;
        )&lt;br /&gt;
        metadata = json.loads(result.stdout)&lt;br /&gt;
    except Exception as e:&lt;br /&gt;
        print(f&amp;quot;Error running ffprobe: {e}&amp;quot;)&lt;br /&gt;
        return&lt;br /&gt;
    &lt;br /&gt;
    # Check for video stream&lt;br /&gt;
    video_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'video'), None)&lt;br /&gt;
    if video_stream:&lt;br /&gt;
        # Check video codec&lt;br /&gt;
        video_codec = video_stream.get('codec_name')&lt;br /&gt;
        if video_codec == 'h264':&lt;br /&gt;
            print(&amp;quot;Video codec: H.264&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video codec is NOT H.264 (Found: {video_codec})&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Extract and report frame rate&lt;br /&gt;
        if 'r_frame_rate' in video_stream:&lt;br /&gt;
            raw_frame_rate = video_stream['r_frame_rate']&lt;br /&gt;
            num, den = map(int, raw_frame_rate.split('/'))  # Parse a string like &amp;quot;30/1&amp;quot; without eval&lt;br /&gt;
            calculated_frame_rate = num / den&lt;br /&gt;
            print(f&amp;quot;Frame rate: {calculated_frame_rate:.2f} FPS (raw: {raw_frame_rate})&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine raw frame rate from metadata.&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Check for constant frame rate&lt;br /&gt;
        if video_stream.get('avg_frame_rate'):&lt;br /&gt;
            avg_num, avg_den = map(int, video_stream['avg_frame_rate'].split('/'))  # Parse without eval&lt;br /&gt;
            avg_frame_rate = avg_num / avg_den&lt;br /&gt;
            if abs(avg_frame_rate - calculated_frame_rate) &amp;lt; 0.01:&lt;br /&gt;
                print(&amp;quot;Frame rate: Constant&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(f&amp;quot;ERROR: Frame rate is NOT constant (avg_frame_rate: {avg_frame_rate:.2f} FPS)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine average frame rate consistency.&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check for frame drops&lt;br /&gt;
        try:&lt;br /&gt;
            frame_info_result = subprocess.run(&lt;br /&gt;
                [&lt;br /&gt;
                    &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                    &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                    &amp;quot;-select_streams&amp;quot;, &amp;quot;v:0&amp;quot;,&lt;br /&gt;
                    &amp;quot;-show_entries&amp;quot;, &amp;quot;frame=pkt_pts_time&amp;quot;,&lt;br /&gt;
                    &amp;quot;-of&amp;quot;, &amp;quot;csv=p=0&amp;quot;,&lt;br /&gt;
                    file_path&lt;br /&gt;
                ],&lt;br /&gt;
                stdout=subprocess.PIPE,&lt;br /&gt;
                stderr=subprocess.PIPE,&lt;br /&gt;
                text=True&lt;br /&gt;
            )&lt;br /&gt;
            # Filter out empty or invalid lines&lt;br /&gt;
            frame_times = [&lt;br /&gt;
                float(line.strip()) for line in frame_info_result.stdout.splitlines()&lt;br /&gt;
                if line.strip()  # Exclude empty lines&lt;br /&gt;
            ]&lt;br /&gt;
            expected_interval = 1.0 / calculated_frame_rate  # Expected time between frames&lt;br /&gt;
            frame_drops = [&lt;br /&gt;
                i for i, (t1, t2) in enumerate(zip(frame_times, frame_times[1:]))&lt;br /&gt;
                if abs(t2 - t1 - expected_interval) &amp;gt; 0.01  # Tolerance for irregularity&lt;br /&gt;
            ]&lt;br /&gt;
            if frame_drops:&lt;br /&gt;
                print(f&amp;quot;ERROR: Detected frame drops at frames: {frame_drops}&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(&amp;quot;No frame drops detected.&amp;quot;)&lt;br /&gt;
        except Exception as e:&lt;br /&gt;
            print(f&amp;quot;Error analyzing frames for drops: {e}&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No video stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check for audio stream&lt;br /&gt;
    audio_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'audio'), None)&lt;br /&gt;
    if audio_stream:&lt;br /&gt;
        # Check audio codec&lt;br /&gt;
        audio_codec = audio_stream.get('codec_name')&lt;br /&gt;
        if audio_codec == 'pcm_s16le':&lt;br /&gt;
            print(&amp;quot;Audio codec: WAV (PCM)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio codec is NOT WAV (PCM) (Found: {audio_codec})&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check sample rate&lt;br /&gt;
        sample_rate = audio_stream.get('sample_rate')&lt;br /&gt;
        if sample_rate == &amp;quot;48000&amp;quot;:&lt;br /&gt;
            print(&amp;quot;Audio sample rate: 48 kHz&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio sample rate is NOT 48 kHz (Found: {sample_rate} Hz)&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No audio stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check synchronization&lt;br /&gt;
    if video_stream and audio_stream:&lt;br /&gt;
        video_start_pts = float(video_stream.get('start_time', 0))&lt;br /&gt;
        audio_start_pts = float(audio_stream.get('start_time', 0))&lt;br /&gt;
        if abs(video_start_pts - audio_start_pts) &amp;lt; 0.01:  # Tolerance for synchronization&lt;br /&gt;
            print(&amp;quot;Video and audio are synchronized.&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video and audio are NOT synchronized. Start difference: {abs(video_start_pts - audio_start_pts):.3f} seconds&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: Could not determine synchronization (missing video or audio streams).&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Example usage&lt;br /&gt;
if __name__ == &amp;quot;__main__&amp;quot;:&lt;br /&gt;
    check_video_file(file_path)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to disconnect audio from video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', '-c:v', 'copy', output_video])  # strip audio; copy video without re-encoding&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '48000', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
The TSG recommends the [https://www.elgato.com/ww/en/p/facecam-mk2 Facecam] for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 [[media:Openh264-1.8.0-win64_.zip | codec(libx264)]]) &lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
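As a sketch of how the settings in this table combine, the following builds the corresponding ffmpeg argument list in Python (the file names and the 15 Mbps bitrate are placeholder choices within the recommended range); pass the list to subprocess.run to actually encode:&lt;br /&gt;

```python
def build_encode_cmd(input_file, output_file, fps=60, bitrate='15M'):
    """Return ffmpeg arguments for H.264, Full HD, constant-frame-rate output."""
    return [
        'ffmpeg',
        '-i', input_file,
        '-c:v', 'libx264',         # H.264 codec (table: file format)
        '-vf', 'scale=1920:1080',  # Full HD resolution
        '-r', str(fps),            # 60 fps
        '-b:v', bitrate,           # 10-20 Mbps range for Full HD
        '-vsync', 'cfr',           # enforce a constant frame rate
        output_file,
    ]

print(' '.join(build_encode_cmd('raw_recording.mp4', 'stimulus.mp4')))
```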
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for timing-critical experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), set the &amp;quot;Graphics performance preference&amp;quot; to &amp;quot;High performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height, freq)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp + '_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
&lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
&lt;br /&gt;
            video_writer.write(frame)  # Write the frame to the file system&lt;br /&gt;
&lt;br /&gt;
            # Show a downscaled preview to keep the rendering load low&lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4), int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)&lt;br /&gt;
&lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and close the output file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|PCM (WAV), uncompressed, or another lossless or high-quality codec&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Convert your audio file to these settings with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
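After converting, you can verify the result with a few lines of Python. The snippet below is a minimal sketch using only the standard library; the file name is a placeholder for your own converted file.&lt;br /&gt;

```python
# Verify that a WAV file matches the recommended settings above:
# 48 kHz sample rate, 16-bit samples, stereo.
import wave

def check_wav_settings(path):
    """Return (ok, rate_hz, bits, channels) for a WAV file."""
    with wave.open(path, 'rb') as w:
        rate = w.getframerate()
        bits = w.getsampwidth() * 8   # bytes per sample -> bits
        channels = w.getnchannels()
    ok = (rate == 48000 and bits == 16 and channels == 2)
    return ok, rate, bits, channels
```

For a file produced by the command above, check_wav_settings('output_fixed.wav') should return (True, 48000, 16, 2).&lt;br /&gt;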
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click the playback device → Properties:&lt;br /&gt;
&lt;br /&gt;
   Advanced tab:&lt;br /&gt;
&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit (Studio Quality).&lt;br /&gt;
&lt;br /&gt;
   - Under Exclusive Mode, check &amp;quot;Allow applications to take exclusive control of this device&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
   - Check &amp;quot;Give exclusive mode applications priority&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
   Enhancements tab:&lt;br /&gt;
&lt;br /&gt;
   - Disable all enhancements.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check your OS audio settings and play an audio file:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
wave_file.stop()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mp4&lt;br /&gt;
	-c:v libx264: Encodes video using H.264.&lt;br /&gt;
	-preset slow: Optimizes for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces a constant frame rate.&lt;br /&gt;
	-g 30: Sets the keyframe (GOP) interval to 30 frames.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio as uncompressed 16-bit PCM.&lt;br /&gt;
	-ar 48000: Sets the audio sample rate to 48 kHz.&lt;br /&gt;
	-fflags +genpts: Generates accurate presentation timestamps.&lt;br /&gt;
	-async 1: Synchronizes the audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommendations===&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
&lt;br /&gt;
For simple editing tasks, you can use '''Shotcut''', an open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
For more advanced editing and for converting video files, you can use '''DaVinci Resolve''', a free, professional-grade editor, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6050</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6050"/>
		<updated>2025-04-29T09:47:14Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Python */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to consult a TSG member whenever you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
Note that the lab computer displays are typically set to 1920×1080 at 120 Hz, which we have found sufficient for most applications; higher settings are possible. Later on this page we explain how to encode audio and video. We start with playing video, both with and without audio. &lt;br /&gt;
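As a rule of thumb, pick a video frame rate that divides the display refresh rate evenly, so every frame is shown for a whole number of refresh cycles. A minimal sketch to check this:&lt;br /&gt;

```python
# Check whether a video frame rate divides a display refresh rate evenly,
# so that each video frame spans a whole number of refresh cycles.
def refreshes_per_frame(refresh_hz, video_fps):
    ratio = refresh_hz / video_fps
    return ratio, ratio.is_integer()

print(refreshes_per_frame(120, 60))  # (2.0, True): each frame spans 2 refreshes
print(refreshes_per_frame(120, 50))  # (2.4, False): frame durations will alternate
```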
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart=False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video while its audio track is played separately:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
# Audio preferences must be set before psychopy.sound is imported&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Example demonstrating how to check whether the video and audio encoding are correct:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import subprocess&lt;br /&gt;
import json&lt;br /&gt;
&lt;br /&gt;
file_path = &amp;quot;tick_rhythm_combined_1min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def check_video_file(file_path):&lt;br /&gt;
    try:&lt;br /&gt;
        # Run ffprobe to get file metadata in JSON format&lt;br /&gt;
        result = subprocess.run(&lt;br /&gt;
            [&lt;br /&gt;
                &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_streams&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_format&amp;quot;,&lt;br /&gt;
                &amp;quot;-print_format&amp;quot;, &amp;quot;json&amp;quot;,&lt;br /&gt;
                file_path&lt;br /&gt;
            ],&lt;br /&gt;
            stdout=subprocess.PIPE,&lt;br /&gt;
            stderr=subprocess.PIPE,&lt;br /&gt;
            text=True&lt;br /&gt;
        )&lt;br /&gt;
        metadata = json.loads(result.stdout)&lt;br /&gt;
    except Exception as e:&lt;br /&gt;
        print(f&amp;quot;Error running ffprobe: {e}&amp;quot;)&lt;br /&gt;
        return&lt;br /&gt;
    &lt;br /&gt;
    # Check for video stream&lt;br /&gt;
    video_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'video'), None)&lt;br /&gt;
    if video_stream:&lt;br /&gt;
        # Check video codec&lt;br /&gt;
        video_codec = video_stream.get('codec_name')&lt;br /&gt;
        if video_codec == 'h264':&lt;br /&gt;
            print(&amp;quot;Video codec: H.264&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video codec is NOT H.264 (Found: {video_codec})&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Extract and report the frame rate&lt;br /&gt;
        calculated_frame_rate = None&lt;br /&gt;
        if 'r_frame_rate' in video_stream:&lt;br /&gt;
            raw_frame_rate = video_stream['r_frame_rate']&lt;br /&gt;
            calculated_frame_rate = eval(raw_frame_rate)  # Convert a string like &amp;quot;30/1&amp;quot; to a float&lt;br /&gt;
            print(f&amp;quot;Frame rate: {calculated_frame_rate:.2f} FPS (raw: {raw_frame_rate})&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine raw frame rate from metadata.&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Check for constant frame rate&lt;br /&gt;
        if video_stream.get('avg_frame_rate') and calculated_frame_rate:&lt;br /&gt;
            avg_frame_rate = eval(video_stream['avg_frame_rate'])&lt;br /&gt;
            if abs(avg_frame_rate - calculated_frame_rate) &amp;lt; 0.01:&lt;br /&gt;
                print(&amp;quot;Frame rate: Constant&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(f&amp;quot;ERROR: Frame rate is NOT constant (avg_frame_rate: {avg_frame_rate:.2f} FPS)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine average frame rate consistency.&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check for frame drops&lt;br /&gt;
        try:&lt;br /&gt;
            frame_info_result = subprocess.run(&lt;br /&gt;
                [&lt;br /&gt;
                    &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                    &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                    &amp;quot;-select_streams&amp;quot;, &amp;quot;v:0&amp;quot;,&lt;br /&gt;
                    &amp;quot;-show_entries&amp;quot;, &amp;quot;frame=pkt_pts_time&amp;quot;,&lt;br /&gt;
                    &amp;quot;-of&amp;quot;, &amp;quot;csv=p=0&amp;quot;,&lt;br /&gt;
                    file_path&lt;br /&gt;
                ],&lt;br /&gt;
                stdout=subprocess.PIPE,&lt;br /&gt;
                stderr=subprocess.PIPE,&lt;br /&gt;
                text=True&lt;br /&gt;
            )&lt;br /&gt;
            # Filter out empty or invalid lines&lt;br /&gt;
            frame_times = [&lt;br /&gt;
                float(line.strip()) for line in frame_info_result.stdout.splitlines()&lt;br /&gt;
                if line.strip()  # Exclude empty lines&lt;br /&gt;
            ]&lt;br /&gt;
            expected_interval = 1.0 / calculated_frame_rate  # Expected time between frames&lt;br /&gt;
            frame_drops = [&lt;br /&gt;
                i for i, (t1, t2) in enumerate(zip(frame_times, frame_times[1:]))&lt;br /&gt;
                if abs(t2 - t1 - expected_interval) &amp;gt; 0.01  # Tolerance for irregularity&lt;br /&gt;
            ]&lt;br /&gt;
            if frame_drops:&lt;br /&gt;
                print(f&amp;quot;ERROR: Detected frame drops at frames: {frame_drops}&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(&amp;quot;No frame drops detected.&amp;quot;)&lt;br /&gt;
        except Exception as e:&lt;br /&gt;
            print(f&amp;quot;Error analyzing frames for drops: {e}&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No video stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check for audio stream&lt;br /&gt;
    audio_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'audio'), None)&lt;br /&gt;
    if audio_stream:&lt;br /&gt;
        # Check audio codec&lt;br /&gt;
        audio_codec = audio_stream.get('codec_name')&lt;br /&gt;
        if audio_codec == 'pcm_s16le':&lt;br /&gt;
            print(&amp;quot;Audio codec: WAV (PCM)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio codec is NOT WAV (PCM) (Found: {audio_codec})&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check sample rate&lt;br /&gt;
        sample_rate = audio_stream.get('sample_rate')&lt;br /&gt;
        if sample_rate == &amp;quot;48000&amp;quot;:&lt;br /&gt;
            print(&amp;quot;Audio sample rate: 48 kHz&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio sample rate is NOT 48 kHz (Found: {sample_rate} Hz)&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No audio stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check synchronization&lt;br /&gt;
    if video_stream and audio_stream:&lt;br /&gt;
        video_start_pts = float(video_stream.get('start_time', 0))&lt;br /&gt;
        audio_start_pts = float(audio_stream.get('start_time', 0))&lt;br /&gt;
        if abs(video_start_pts - audio_start_pts) &amp;lt; 0.01:  # Tolerance for synchronization&lt;br /&gt;
            print(&amp;quot;Video and audio are synchronized.&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video and audio are NOT synchronized. Start difference: {abs(video_start_pts - audio_start_pts):.3f} seconds&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: Could not determine synchronization (missing video or audio streams).&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Example usage&lt;br /&gt;
if __name__ == &amp;quot;__main__&amp;quot;:&lt;br /&gt;
    check_video_file(file_path)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to disconnect audio from video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '48000', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
&lt;br /&gt;
- Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
&lt;br /&gt;
- Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
&lt;br /&gt;
- Stabilize the camera and avoid automatic exposure, white balance, or focus during recording to prevent inconsistencies.&lt;br /&gt;
&lt;br /&gt;
- Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
&lt;br /&gt;
You can use the [https://www.elgato.com/ww/en/p/facecam-mk2 Facecam] for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 [[media:Openh264-1.8.0-win64_.zip | codec(libx264)]]) &lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
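The recommended bitrate translates directly into storage requirements. A minimal sketch for estimating the file size before recording (example numbers, not measurements):&lt;br /&gt;

```python
# Estimate the approximate size of a recording from its video bitrate.
def estimate_size_gb(bitrate_mbps, duration_min):
    total_bits = bitrate_mbps * 1e6 * duration_min * 60
    return total_bits / 8 / 1e9   # bits -> bytes -> gigabytes

# A 30-minute Full HD recording at 15 Mbps:
print(round(estimate_size_gb(15, 30), 2))  # 3.38
```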
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for timing-critical experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics performance preference&amp;quot;, set them to &amp;quot;High performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height, freq)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and close the video file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|PCM (WAV), uncompressed, or another lossless or high-quality codec&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Convert your audio file to these settings with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click the playback device → Properties:&lt;br /&gt;
&lt;br /&gt;
   Advanced tab:&lt;br /&gt;
&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit (Studio Quality).&lt;br /&gt;
&lt;br /&gt;
   - Under Exclusive Mode, check &amp;quot;Allow applications to take exclusive control of this device&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
   - Check &amp;quot;Give exclusive mode applications priority&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
   Enhancements tab:&lt;br /&gt;
&lt;br /&gt;
   - Disable all enhancements.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check your OS audio settings and play an audio file:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
wave_file.stop()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
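These options can be assembled from Python in the same way as the other subprocess examples on this page. The sketch below uses placeholder file names and re-encodes the audio, since -async has no effect on a stream that is only copied:&lt;br /&gt;

```python
# Build an ffmpeg command that applies the synchronization options above.
import subprocess

def build_sync_command(src, dst):
    return [
        'ffmpeg',
        '-fflags', '+genpts',   # regenerate presentation timestamps (input option)
        '-i', src,
        '-map', '0:v:0',        # explicitly select the first video stream
        '-map', '0:a:0',        # explicitly select the first audio stream
        '-async', '1',          # correct audio drift against the timestamps
        '-c:v', 'copy',         # leave the video stream untouched
        '-c:a', 'pcm_s16le',    # re-encode audio so -async can take effect
        dst,
    ]

# subprocess.run(build_sync_command('input.mp4', 'synced.mkv'), check=True)
```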
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mp4&lt;br /&gt;
	-c:v libx264: Encodes video using H.264.&lt;br /&gt;
	-preset slow: Optimizes for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces a constant frame rate.&lt;br /&gt;
	-g 30: Sets the keyframe (GOP) interval to 30 frames.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio as uncompressed 16-bit PCM.&lt;br /&gt;
	-ar 48000: Sets the audio sample rate to 48 kHz.&lt;br /&gt;
	-fflags +genpts: Generates accurate presentation timestamps.&lt;br /&gt;
	-async 1: Synchronizes the audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommendations===&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
&lt;br /&gt;
You can use '''Shotcut''', a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
Another option is '''DaVinci Resolve''', a free, professional-grade program for editing and converting video files, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6049</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6049"/>
		<updated>2025-04-29T09:46:06Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Python */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to consult a TSG member whenever you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
Note that the Lab Computer displays are typically set to 1920×1080 at 120 Hz, which we have found sufficient for most applications, although higher settings are possible. Later on this page we explain how to encode audio and video; we start with playing video, both with and without audio.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart= False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with the audio track loaded and played separately:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Example demonstrating how to check whether the video and audio encoding are correct:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import subprocess&lt;br /&gt;
import json&lt;br /&gt;
&lt;br /&gt;
file_path = &amp;quot;C_dyad1_video2_241123.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def check_video_file(file_path):&lt;br /&gt;
    try:&lt;br /&gt;
        # Run ffprobe to get file metadata in JSON format&lt;br /&gt;
        result = subprocess.run(&lt;br /&gt;
            [&lt;br /&gt;
                &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_streams&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_format&amp;quot;,&lt;br /&gt;
                &amp;quot;-print_format&amp;quot;, &amp;quot;json&amp;quot;,&lt;br /&gt;
                file_path&lt;br /&gt;
            ],&lt;br /&gt;
            stdout=subprocess.PIPE,&lt;br /&gt;
            stderr=subprocess.PIPE,&lt;br /&gt;
            text=True&lt;br /&gt;
        )&lt;br /&gt;
        metadata = json.loads(result.stdout)&lt;br /&gt;
    except Exception as e:&lt;br /&gt;
        print(f&amp;quot;Error running ffprobe: {e}&amp;quot;)&lt;br /&gt;
        return&lt;br /&gt;
    &lt;br /&gt;
    # Check for video stream&lt;br /&gt;
    video_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'video'), None)&lt;br /&gt;
    if video_stream:&lt;br /&gt;
        # Check video codec&lt;br /&gt;
        video_codec = video_stream.get('codec_name')&lt;br /&gt;
        if video_codec == 'h264':&lt;br /&gt;
            print(&amp;quot;Video codec: H.264&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video codec is NOT H.264 (Found: {video_codec})&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Extract and report frame rate&lt;br /&gt;
        if 'r_frame_rate' in video_stream:&lt;br /&gt;
            raw_frame_rate = video_stream['r_frame_rate']&lt;br /&gt;
            calculated_frame_rate = eval(raw_frame_rate)  # Convert string like &amp;quot;30/1&amp;quot; to float&lt;br /&gt;
            print(f&amp;quot;Frame rate: {calculated_frame_rate:.2f} FPS (raw: {raw_frame_rate})&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine raw frame rate from metadata.&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Check for constant frame rate&lt;br /&gt;
        if video_stream.get('avg_frame_rate'):&lt;br /&gt;
            avg_frame_rate = eval(video_stream['avg_frame_rate'])&lt;br /&gt;
            if abs(avg_frame_rate - calculated_frame_rate) &amp;lt; 0.01:&lt;br /&gt;
                print(&amp;quot;Frame rate: Constant&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(f&amp;quot;ERROR: Frame rate is NOT constant (avg_frame_rate: {avg_frame_rate:.2f} FPS)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine average frame rate consistency.&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check for frame drops&lt;br /&gt;
        try:&lt;br /&gt;
            frame_info_result = subprocess.run(&lt;br /&gt;
                [&lt;br /&gt;
                    &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                    &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                    &amp;quot;-select_streams&amp;quot;, &amp;quot;v:0&amp;quot;,&lt;br /&gt;
                    &amp;quot;-show_entries&amp;quot;, &amp;quot;frame=pts_time&amp;quot;,  # pkt_pts_time in ffprobe &amp;lt; 5.0&lt;br /&gt;
                    &amp;quot;-of&amp;quot;, &amp;quot;csv=p=0&amp;quot;,&lt;br /&gt;
                    file_path&lt;br /&gt;
                ],&lt;br /&gt;
                stdout=subprocess.PIPE,&lt;br /&gt;
                stderr=subprocess.PIPE,&lt;br /&gt;
                text=True&lt;br /&gt;
            )&lt;br /&gt;
            # Filter out empty or invalid lines&lt;br /&gt;
            frame_times = [&lt;br /&gt;
                float(line.strip()) for line in frame_info_result.stdout.splitlines()&lt;br /&gt;
                if line.strip()  # Exclude empty lines&lt;br /&gt;
            ]&lt;br /&gt;
            expected_interval = 1.0 / calculated_frame_rate  # Expected time between frames&lt;br /&gt;
            frame_drops = [&lt;br /&gt;
                i for i, (t1, t2) in enumerate(zip(frame_times, frame_times[1:]))&lt;br /&gt;
                if abs(t2 - t1 - expected_interval) &amp;gt; 0.01  # Tolerance for irregularity&lt;br /&gt;
            ]&lt;br /&gt;
            if frame_drops:&lt;br /&gt;
                print(f&amp;quot;ERROR: Detected frame drops at frames: {frame_drops}&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(&amp;quot;No frame drops detected.&amp;quot;)&lt;br /&gt;
        except Exception as e:&lt;br /&gt;
            print(f&amp;quot;Error analyzing frames for drops: {e}&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No video stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check for audio stream&lt;br /&gt;
    audio_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'audio'), None)&lt;br /&gt;
    if audio_stream:&lt;br /&gt;
        # Check audio codec&lt;br /&gt;
        audio_codec = audio_stream.get('codec_name')&lt;br /&gt;
        if audio_codec == 'pcm_s16le':&lt;br /&gt;
            print(&amp;quot;Audio codec: WAV (PCM)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio codec is NOT WAV (PCM) (Found: {audio_codec})&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check sample rate&lt;br /&gt;
        sample_rate = audio_stream.get('sample_rate')&lt;br /&gt;
        if sample_rate == &amp;quot;44100&amp;quot;:&lt;br /&gt;
            print(&amp;quot;Audio sample rate: 44.1 kHz&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio sample rate is NOT 44.1 kHz (Found: {sample_rate} Hz)&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No audio stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check synchronization&lt;br /&gt;
    if video_stream and audio_stream:&lt;br /&gt;
        video_start_pts = float(video_stream.get('start_time', 0))&lt;br /&gt;
        audio_start_pts = float(audio_stream.get('start_time', 0))&lt;br /&gt;
        if abs(video_start_pts - audio_start_pts) &amp;lt; 0.01:  # Tolerance for synchronization&lt;br /&gt;
            print(&amp;quot;Video and audio are synchronized.&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video and audio are NOT synchronized. Start difference: {abs(video_start_pts - audio_start_pts):.3f} seconds&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: Could not determine synchronization (missing video or audio streams).&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Example usage&lt;br /&gt;
if __name__ == &amp;quot;__main__&amp;quot;:&lt;br /&gt;
    check_video_file(file_path)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
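The `eval` calls in the script above work for well-formed ratio strings such as &amp;quot;30/1&amp;quot;, but a safer way to parse ffprobe rate strings is `fractions.Fraction`. A minimal stand-alone sketch (the helper name `parse_rate` is our own):&lt;br /&gt;

```python
from fractions import Fraction

def parse_rate(rate_str):
    """Convert an ffprobe rate string such as '30000/1001' to a float, or None if unparsable."""
    if not rate_str or rate_str == "0/0":
        return None
    try:
        return float(Fraction(rate_str))
    except (ValueError, ZeroDivisionError):
        return None

print(parse_rate("30/1"))        # 30.0
print(parse_rate("30000/1001"))  # NTSC rate, ~29.97
```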
&lt;br /&gt;
Example demonstrating how to split the audio track from a video file:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '44100', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
- Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
- Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
- Stabilize the camera and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
- Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the [https://www.elgato.com/ww/en/p/facecam-mk2 facecam] for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 [[media:Openh264-1.8.0-win64_.zip | codec(libx264)]]) &lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
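You can also verify these settings programmatically on an existing file, using a video-stream dictionary of the kind ffprobe returns (see the encoding check script in the Python section above). A minimal sketch; `check_video_settings` is our own helper name, and only codec, resolution, and frame rate are checked:&lt;br /&gt;

```python
def check_video_settings(stream):
    """Compare an ffprobe video-stream dict against the recommended settings; return a list of problems."""
    problems = []
    if stream.get("codec_name") != "h264":
        problems.append(f"codec is {stream.get('codec_name')}, expected h264")
    if (stream.get("width"), stream.get("height")) != (1920, 1080):
        problems.append(f"resolution is {stream.get('width')}x{stream.get('height')}, expected 1920x1080")
    if stream.get("r_frame_rate") != "60/1":
        problems.append(f"frame rate is {stream.get('r_frame_rate')}, expected 60/1")
    return problems

ok = {"codec_name": "h264", "width": 1920, "height": 1080, "r_frame_rate": "60/1"}
bad = {"codec_name": "hevc", "width": 1280, "height": 720, "r_frame_rate": "30/1"}
print(check_video_settings(ok))        # no problems
print(len(check_video_settings(bad)))  # three problems
```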
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height, fps=freq)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and finalize the video file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
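To verify that the camera actually delivered frames at the requested rate, you can log a timestamp per captured frame (e.g., with `time.perf_counter()` inside the loop above) and inspect the intervals afterwards. A minimal stand-alone sketch; `check_intervals` is our own helper name:&lt;br /&gt;

```python
import statistics

def check_intervals(timestamps, target_fps=60, tolerance=0.25):
    """Return (mean fps, number of irregular intervals) for per-frame timestamps in seconds."""
    intervals = [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]
    expected = 1.0 / target_fps
    mean_fps = 1.0 / statistics.mean(intervals)
    irregular = sum(1 for d in intervals if abs(d - expected) > tolerance * expected)
    return mean_fps, irregular

# Perfectly spaced 60 fps timestamps should report ~60 fps and no irregular intervals
ts = [i / 60 for i in range(120)]
mean_fps, irregular = check_intervals(ts)
print(f"{mean_fps:.1f} fps, {irregular} irregular intervals")
```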
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|lossless or high-quality codecs&lt;br /&gt;
|-&lt;br /&gt;
!PCM (WAV)&lt;br /&gt;
|uncompressed&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Set your audio for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
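The same conversion can be driven from Python, consistent with the subprocess examples earlier on this page. This sketch (with `build_audio_fix_cmd` as our own helper name) only builds the argument list; executing it requires ffmpeg on the PATH:&lt;br /&gt;

```python
def build_audio_fix_cmd(src, dst, sample_rate=48000, channels=2):
    """Build the ffmpeg argument list for the conversion shown above (not executed here)."""
    return [
        "ffmpeg", "-i", src,
        "-ar", str(sample_rate),  # sample rate in Hz
        "-ac", str(channels),     # channel count (2 = stereo)
        "-sample_fmt", "s16",     # 16-bit signed integer samples
        dst,
    ]

cmd = build_audio_fix_cmd("input.wav", "output_fixed.wav")
print(" ".join(cmd))
# To run it: subprocess.run(cmd, check=True)
```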
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your playback device → Properties → Advanced tab:&lt;br /&gt;
&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit, Studio Quality.&lt;br /&gt;
&lt;br /&gt;
   - In the Enhancements tab, disable all sound enhancements.&lt;br /&gt;
&lt;br /&gt;
   - Exclusive Mode (same Advanced tab):&lt;br /&gt;
&lt;br /&gt;
      - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
      - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check your OS settings and audio configuration, and play an audio file:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
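These options can be combined in a single pass driven from Python, in line with the subprocess examples earlier on this page. File names here are placeholders, and the actual `subprocess.run` call is left commented out because it requires ffmpeg on the PATH:&lt;br /&gt;

```python
import subprocess  # only needed if you uncomment the run() call below

cmd = [
    "ffmpeg", "-fflags", "+genpts",    # input option: regenerate presentation timestamps
    "-i", "input.mp4",
    "-map", "0:v:0", "-map", "0:a:0",  # explicitly select the first video and audio streams
    "-async", "1",                     # resynchronize audio if it drifts
    "-c:v", "copy",                    # keep the video stream as-is
    "-c:a", "aac",                     # re-encode audio so -async can adjust it
    "output_synced.mp4",
]
print(" ".join(cmd))
# subprocess.run(cmd, check=True)
```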
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high timing accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mov&lt;br /&gt;
	-c:v libx264: Encodes video using H.264.&lt;br /&gt;
	-preset slow: Optimizes for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces a constant frame rate.&lt;br /&gt;
	-g 30: Sets the keyframe (GOP) interval to 30 frames.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio as uncompressed PCM.&lt;br /&gt;
	-ar 48000: Sets the audio sample rate to 48 kHz.&lt;br /&gt;
	-fflags +genpts: Generates accurate presentation timestamps.&lt;br /&gt;
	-async 1: Synchronizes the audio and video streams.&lt;br /&gt;
	Note: PCM audio is generally not accepted in the MP4 container, so this command writes to .mov; use a compressed codec such as AAC (-c:a aac) if you need .mp4 output.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Additional Tips===&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
&lt;br /&gt;
You can use '''Shotcut''', a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
Another option is '''DaVinci Resolve''', a free, professional-grade program for editing and converting video files, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6048</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6048"/>
		<updated>2025-04-29T09:37:50Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Video playback */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to consult a TSG member whenever you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
Note that the Lab Computer displays are typically set to 1920×1080 at 120 Hz, which we have found sufficient for most applications, although higher settings are possible. Later on this page we explain how to encode audio and video; we start with playing video, both with and without audio.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart= False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with the audio track loaded and played separately:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Example demonstrating how to check whether the video and audio encoding are correct:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import subprocess&lt;br /&gt;
import json&lt;br /&gt;
&lt;br /&gt;
file_path = &amp;quot;C_dyad1_video2_241123.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def check_video_file(file_path):&lt;br /&gt;
    try:&lt;br /&gt;
        # Run ffprobe to get file metadata in JSON format&lt;br /&gt;
        result = subprocess.run(&lt;br /&gt;
            [&lt;br /&gt;
                &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_streams&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_format&amp;quot;,&lt;br /&gt;
                &amp;quot;-print_format&amp;quot;, &amp;quot;json&amp;quot;,&lt;br /&gt;
                file_path&lt;br /&gt;
            ],&lt;br /&gt;
            stdout=subprocess.PIPE,&lt;br /&gt;
            stderr=subprocess.PIPE,&lt;br /&gt;
            text=True&lt;br /&gt;
        )&lt;br /&gt;
        metadata = json.loads(result.stdout)&lt;br /&gt;
    except Exception as e:&lt;br /&gt;
        print(f&amp;quot;Error running ffprobe: {e}&amp;quot;)&lt;br /&gt;
        return&lt;br /&gt;
    &lt;br /&gt;
    # Check for video stream&lt;br /&gt;
    video_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'video'), None)&lt;br /&gt;
    if video_stream:&lt;br /&gt;
        # Check video codec&lt;br /&gt;
        video_codec = video_stream.get('codec_name')&lt;br /&gt;
        if video_codec == 'h264':&lt;br /&gt;
            print(&amp;quot;Video codec: H.264&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video codec is NOT H.264 (Found: {video_codec})&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Extract and report frame rate&lt;br /&gt;
        if 'r_frame_rate' in video_stream:&lt;br /&gt;
            raw_frame_rate = video_stream['r_frame_rate']&lt;br /&gt;
            num, den = raw_frame_rate.split('/')  # Parse a string like &amp;quot;30/1&amp;quot; safely (no eval)&lt;br /&gt;
            calculated_frame_rate = float(num) / float(den)&lt;br /&gt;
            print(f&amp;quot;Frame rate: {calculated_frame_rate:.2f} FPS (raw: {raw_frame_rate})&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine raw frame rate from metadata.&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Check for constant frame rate&lt;br /&gt;
        if video_stream.get('avg_frame_rate'):&lt;br /&gt;
            num, den = video_stream['avg_frame_rate'].split('/')  # Parse safely (no eval)&lt;br /&gt;
            avg_frame_rate = float(num) / float(den)&lt;br /&gt;
            if abs(avg_frame_rate - calculated_frame_rate) &amp;lt; 0.01:&lt;br /&gt;
                print(&amp;quot;Frame rate: Constant&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(f&amp;quot;ERROR: Frame rate is NOT constant (avg_frame_rate: {avg_frame_rate:.2f} FPS)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine average frame rate consistency.&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check for frame drops&lt;br /&gt;
        try:&lt;br /&gt;
            frame_info_result = subprocess.run(&lt;br /&gt;
                [&lt;br /&gt;
                    &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                    &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                    &amp;quot;-select_streams&amp;quot;, &amp;quot;v:0&amp;quot;,&lt;br /&gt;
                    &amp;quot;-show_entries&amp;quot;, &amp;quot;frame=pkt_pts_time&amp;quot;,&lt;br /&gt;
                    &amp;quot;-of&amp;quot;, &amp;quot;csv=p=0&amp;quot;,&lt;br /&gt;
                    file_path&lt;br /&gt;
                ],&lt;br /&gt;
                stdout=subprocess.PIPE,&lt;br /&gt;
                stderr=subprocess.PIPE,&lt;br /&gt;
                text=True&lt;br /&gt;
            )&lt;br /&gt;
            # Filter out empty or invalid lines&lt;br /&gt;
            frame_times = [&lt;br /&gt;
                float(line.strip()) for line in frame_info_result.stdout.splitlines()&lt;br /&gt;
                if line.strip()  # Exclude empty lines&lt;br /&gt;
            ]&lt;br /&gt;
            expected_interval = 1.0 / calculated_frame_rate  # Expected time between frames&lt;br /&gt;
            frame_drops = [&lt;br /&gt;
                i for i, (t1, t2) in enumerate(zip(frame_times, frame_times[1:]))&lt;br /&gt;
                if abs(t2 - t1 - expected_interval) &amp;gt; 0.01  # Tolerance for irregularity&lt;br /&gt;
            ]&lt;br /&gt;
            if frame_drops:&lt;br /&gt;
                print(f&amp;quot;ERROR: Detected frame drops at frames: {frame_drops}&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(&amp;quot;No frame drops detected.&amp;quot;)&lt;br /&gt;
        except Exception as e:&lt;br /&gt;
            print(f&amp;quot;Error analyzing frames for drops: {e}&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No video stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check for audio stream&lt;br /&gt;
    audio_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'audio'), None)&lt;br /&gt;
    if audio_stream:&lt;br /&gt;
        # Check audio codec&lt;br /&gt;
        audio_codec = audio_stream.get('codec_name')&lt;br /&gt;
        if audio_codec == 'pcm_s16le':&lt;br /&gt;
            print(&amp;quot;Audio codec: WAV (PCM)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio codec is NOT WAV (PCM) (Found: {audio_codec})&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check sample rate&lt;br /&gt;
        sample_rate = audio_stream.get('sample_rate')&lt;br /&gt;
        if sample_rate == &amp;quot;44100&amp;quot;:&lt;br /&gt;
            print(&amp;quot;Audio sample rate: 44.1 kHz&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio sample rate is NOT 44.1 kHz (Found: {sample_rate} Hz)&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No audio stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check synchronization&lt;br /&gt;
    if video_stream and audio_stream:&lt;br /&gt;
        video_start_pts = float(video_stream.get('start_time', 0))&lt;br /&gt;
        audio_start_pts = float(audio_stream.get('start_time', 0))&lt;br /&gt;
        if abs(video_start_pts - audio_start_pts) &amp;lt; 0.01:  # Tolerance for synchronization&lt;br /&gt;
            print(&amp;quot;Video and audio are synchronized.&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video and audio are NOT synchronized. Start difference: {abs(video_start_pts - audio_start_pts):.3f} seconds&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: Could not determine synchronization (missing video or audio streams).&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Example usage&lt;br /&gt;
if __name__ == &amp;quot;__main__&amp;quot;:&lt;br /&gt;
    check_video_file(file_path)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to split a file into separate video-only and audio-only files:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '44100', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
- Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
- Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
- Stabilize the camera and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
- Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the [https://www.elgato.com/ww/en/p/facecam-mk2 facecam] for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 [[media:Openh264-1.8.0-win64_.zip | codec(libx264)]]) &lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for timing-critical experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
# Set the Windows timer resolution to 1 ms for accurate timing&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height)&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and finalize the video file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|lossless or high-quality codecs&lt;br /&gt;
|-&lt;br /&gt;
!PCM (WAV)&lt;br /&gt;
|uncompressed&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Set your audio for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
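To confirm that a converted file actually carries these parameters, the header can be inspected with Python's standard wave module. A minimal sketch (the file name is a placeholder; in practice, point it at your own converted file, e.g. output_fixed.wav):

```python
import struct
import wave

# Placeholder: write a short 48 kHz, stereo, 16-bit test file,
# then read its header back to verify the settings.
file_name = "wav_header_check.wav"

with wave.open(file_name, "wb") as w:
    w.setnchannels(2)        # -ac 2 (stereo)
    w.setsampwidth(2)        # -sample_fmt s16 (2 bytes per sample)
    w.setframerate(48000)    # -ar 48000
    w.writeframes(struct.pack("<2h", 0, 0) * 480)  # 10 ms of silence

with wave.open(file_name, "rb") as w:
    assert w.getframerate() == 48000, "sample rate is not 48 kHz"
    assert w.getnchannels() == 2, "audio is not stereo"
    assert w.getsampwidth() == 2, "samples are not 16-bit"
    print("WAV header matches the recommended settings")
```

The same three header reads can be pointed at any WAV file before using it in an experiment.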
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Check the following Windows 10 sound settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your playback device → Properties:&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab: set Default Format to 48000 Hz, 16 bit, Studio Quality.&lt;br /&gt;
&lt;br /&gt;
   - Enhancements tab: disable all enhancements.&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab, Exclusive Mode:&lt;br /&gt;
&lt;br /&gt;
     - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
     - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
wave_file.stop()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
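These options can be combined in the same subprocess style as the Python examples above. A minimal sketch; build_sync_command and the file names are illustrative placeholders, not part of an existing script:

```python
import subprocess

def build_sync_command(input_file, output_file):
    # Assemble an ffmpeg call using the synchronization options described above
    return [
        "ffmpeg",
        "-fflags", "+genpts",    # regenerate presentation timestamps
        "-i", input_file,
        "-map", "0:v:0",         # explicitly take the first video stream
        "-map", "0:a:0",         # explicitly take the first audio stream
        "-async", "1",           # correct audio drift against the video
        "-c:v", "copy",          # leave the video stream untouched
        "-c:a", "aac",
        output_file,
    ]

cmd = build_sync_command("input.mp4", "resynced.mp4")
print(" ".join(cmd))
# subprocess.run(cmd)  # uncomment to run ffmpeg on a real file
```

Building the argument list first makes it easy to log or inspect the exact command before running it on lab recordings.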
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mp4&lt;br /&gt;
	-c:v libx264: Encode video using H.264.&lt;br /&gt;
	-preset slow: Optimize for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces constant frame rate.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio in uncompressed WAV format.&lt;br /&gt;
	-ar 48000: Sets audio sample rate to 48.0 kHz.&lt;br /&gt;
	-fflags +genpts: Ensures accurate timestamps.&lt;br /&gt;
	-async 1: Synchronizes audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Additional Tips===&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
&lt;br /&gt;
You can use '''Shotcut''', a simple open-source video editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
Another option is '''DaVinci Resolve''', a free, professional-grade program for editing and converting video files, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6047</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6047"/>
		<updated>2025-04-29T09:33:48Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Python */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
Note that the Lab Computer displays are typically set to 1920×1080 at 120 Hz, which we have found sufficient for most applications; higher settings are possible. Later on this page we explain how to encode audio and video. We start with playing video, both with and without audio.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot; style=&amp;quot;width:100%&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;padding:5px&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart=False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
Example demonstrating how to play a video with audio disconnected:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Example demonstrating how to check whether the video and audio encoding are correct:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import subprocess&lt;br /&gt;
import json&lt;br /&gt;
&lt;br /&gt;
file_path = &amp;quot;C_dyad1_video2_241123.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def check_video_file(file_path):&lt;br /&gt;
    try:&lt;br /&gt;
        # Run ffprobe to get file metadata in JSON format&lt;br /&gt;
        result = subprocess.run(&lt;br /&gt;
            [&lt;br /&gt;
                &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_streams&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_format&amp;quot;,&lt;br /&gt;
                &amp;quot;-print_format&amp;quot;, &amp;quot;json&amp;quot;,&lt;br /&gt;
                file_path&lt;br /&gt;
            ],&lt;br /&gt;
            stdout=subprocess.PIPE,&lt;br /&gt;
            stderr=subprocess.PIPE,&lt;br /&gt;
            text=True&lt;br /&gt;
        )&lt;br /&gt;
        metadata = json.loads(result.stdout)&lt;br /&gt;
    except Exception as e:&lt;br /&gt;
        print(f&amp;quot;Error running ffprobe: {e}&amp;quot;)&lt;br /&gt;
        return&lt;br /&gt;
    &lt;br /&gt;
    # Check for video stream&lt;br /&gt;
    video_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'video'), None)&lt;br /&gt;
    if video_stream:&lt;br /&gt;
        # Check video codec&lt;br /&gt;
        video_codec = video_stream.get('codec_name')&lt;br /&gt;
        if video_codec == 'h264':&lt;br /&gt;
            print(&amp;quot;Video codec: H.264&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video codec is NOT H.264 (Found: {video_codec})&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Extract and report frame rate&lt;br /&gt;
        if 'r_frame_rate' in video_stream:&lt;br /&gt;
            raw_frame_rate = video_stream['r_frame_rate']&lt;br /&gt;
            num, den = raw_frame_rate.split('/')  # Parse a string like &amp;quot;30/1&amp;quot; safely (no eval)&lt;br /&gt;
            calculated_frame_rate = float(num) / float(den)&lt;br /&gt;
            print(f&amp;quot;Frame rate: {calculated_frame_rate:.2f} FPS (raw: {raw_frame_rate})&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine raw frame rate from metadata.&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Check for constant frame rate&lt;br /&gt;
        if video_stream.get('avg_frame_rate'):&lt;br /&gt;
            num, den = video_stream['avg_frame_rate'].split('/')  # Parse safely (no eval)&lt;br /&gt;
            avg_frame_rate = float(num) / float(den)&lt;br /&gt;
            if abs(avg_frame_rate - calculated_frame_rate) &amp;lt; 0.01:&lt;br /&gt;
                print(&amp;quot;Frame rate: Constant&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(f&amp;quot;ERROR: Frame rate is NOT constant (avg_frame_rate: {avg_frame_rate:.2f} FPS)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine average frame rate consistency.&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check for frame drops&lt;br /&gt;
        try:&lt;br /&gt;
            frame_info_result = subprocess.run(&lt;br /&gt;
                [&lt;br /&gt;
                    &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                    &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                    &amp;quot;-select_streams&amp;quot;, &amp;quot;v:0&amp;quot;,&lt;br /&gt;
                    &amp;quot;-show_entries&amp;quot;, &amp;quot;frame=pkt_pts_time&amp;quot;,&lt;br /&gt;
                    &amp;quot;-of&amp;quot;, &amp;quot;csv=p=0&amp;quot;,&lt;br /&gt;
                    file_path&lt;br /&gt;
                ],&lt;br /&gt;
                stdout=subprocess.PIPE,&lt;br /&gt;
                stderr=subprocess.PIPE,&lt;br /&gt;
                text=True&lt;br /&gt;
            )&lt;br /&gt;
            # Filter out empty or invalid lines&lt;br /&gt;
            frame_times = [&lt;br /&gt;
                float(line.strip()) for line in frame_info_result.stdout.splitlines()&lt;br /&gt;
                if line.strip()  # Exclude empty lines&lt;br /&gt;
            ]&lt;br /&gt;
            expected_interval = 1.0 / calculated_frame_rate  # Expected time between frames&lt;br /&gt;
            frame_drops = [&lt;br /&gt;
                i for i, (t1, t2) in enumerate(zip(frame_times, frame_times[1:]))&lt;br /&gt;
                if abs(t2 - t1 - expected_interval) &amp;gt; 0.01  # Tolerance for irregularity&lt;br /&gt;
            ]&lt;br /&gt;
            if frame_drops:&lt;br /&gt;
                print(f&amp;quot;ERROR: Detected frame drops at frames: {frame_drops}&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(&amp;quot;No frame drops detected.&amp;quot;)&lt;br /&gt;
        except Exception as e:&lt;br /&gt;
            print(f&amp;quot;Error analyzing frames for drops: {e}&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No video stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check for audio stream&lt;br /&gt;
    audio_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'audio'), None)&lt;br /&gt;
    if audio_stream:&lt;br /&gt;
        # Check audio codec&lt;br /&gt;
        audio_codec = audio_stream.get('codec_name')&lt;br /&gt;
        if audio_codec == 'pcm_s16le':&lt;br /&gt;
            print(&amp;quot;Audio codec: WAV (PCM)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio codec is NOT WAV (PCM) (Found: {audio_codec})&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check sample rate&lt;br /&gt;
        sample_rate = audio_stream.get('sample_rate')&lt;br /&gt;
        if sample_rate == &amp;quot;44100&amp;quot;:&lt;br /&gt;
            print(&amp;quot;Audio sample rate: 44.1 kHz&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio sample rate is NOT 44.1 kHz (Found: {sample_rate} Hz)&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No audio stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check synchronization&lt;br /&gt;
    if video_stream and audio_stream:&lt;br /&gt;
        video_start_pts = float(video_stream.get('start_time', 0))&lt;br /&gt;
        audio_start_pts = float(audio_stream.get('start_time', 0))&lt;br /&gt;
        if abs(video_start_pts - audio_start_pts) &amp;lt; 0.01:  # Tolerance for synchronization&lt;br /&gt;
            print(&amp;quot;Video and audio are synchronized.&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video and audio are NOT synchronized. Start difference: {abs(video_start_pts - audio_start_pts):.3f} seconds&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: Could not determine synchronization (missing video or audio streams).&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Example usage&lt;br /&gt;
if __name__ == &amp;quot;__main__&amp;quot;:&lt;br /&gt;
    check_video_file(file_path)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to split a file into separate video-only and audio-only files:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '44100', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the [https://www.elgato.com/ww/en/p/facecam-mk2 Elgato Facecam] for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 [[media:Openh264-1.8.0-win64_.zip | codec(libx264)]]) &lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|Enforce a constant frame rate (a variable frame rate causes timing drift)&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
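&lt;br /&gt;
The recommended settings above can be combined into a single FFmpeg call. The sketch below only builds the argument list (pass it to subprocess.run to execute); the file names are placeholders, and the 15 Mbps bitrate is simply a midpoint of the recommended range.&lt;br /&gt;

```python
# Sketch: build an FFmpeg command implementing the recommended video settings.
# 'raw_recording.mov' and 'stimulus.mp4' are placeholder file names.
def build_encode_cmd(src, dst, fps=60, bitrate='15M'):
    return [
        'ffmpeg', '-i', src,
        '-c:v', 'libx264',        # H.264 codec
        '-r', str(fps),           # 60 fps
        '-vsync', 'cfr',          # enforce a constant frame rate
        '-b:v', bitrate,          # 10-20 Mbps recommended for Full HD
        '-s', '1920x1080',        # Full HD resolution
        dst,
    ]

cmd = build_encode_cmd('raw_recording.mov', 'stimulus.mp4')
print(' '.join(cmd))
```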
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height, freq)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and finalize the video file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|PCM (WAV), uncompressed, or another lossless codec&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Convert your audio to this low-latency, high-accuracy playback format with FFmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
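&lt;br /&gt;
After converting, you can verify a WAV file's header from Python with the standard-library wave module. In this self-contained sketch, a hypothetical 0.1-second silent clip ('demo_48k.wav') is written first so the check has something to inspect.&lt;br /&gt;

```python
import wave

def check_wav(path, rate=48000, channels=2, sampwidth=2):
    """Return True if the WAV file matches the recommended format."""
    with wave.open(path, 'rb') as w:
        return (w.getframerate() == rate
                and w.getnchannels() == channels
                and w.getsampwidth() == sampwidth)

# Write a 0.1 s silent stereo 16-bit 48 kHz clip as a demo target.
with wave.open('demo_48k.wav', 'wb') as w:
    w.setnchannels(2)
    w.setsampwidth(2)                    # 16-bit samples
    w.setframerate(48000)
    w.writeframes(bytes(2 * 2 * 4800))   # 4800 frames of silence

print(check_wav('demo_48k.wav'))
```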
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your device → Properties:&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab: set Default Format to 48000 Hz, 16 bit, Studio Quality.&lt;br /&gt;
&lt;br /&gt;
   - Enhancements tab: disable all enhancements.&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab, Exclusive Mode:&lt;br /&gt;
&lt;br /&gt;
     - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
     - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd  # used below to query the OS-level output device&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up (no window was opened, so only stop the sound and quit)&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
wave_file.stop()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
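&lt;br /&gt;
The options above can be combined as in the following sketch, which only assembles the argument list (file names are placeholders); building a list rather than one shell string keeps quoting simple when handing it to subprocess.run.&lt;br /&gt;

```python
# Sketch: assemble the synchronization options above into one FFmpeg call.
sync_cmd = [
    'ffmpeg',
    '-fflags', '+genpts',       # regenerate presentation timestamps
    '-i', 'input.mp4',          # placeholder input file
    '-map', '0:v:0',            # explicitly take the first video stream
    '-map', '0:a:0',            # ... and the first audio stream
    '-async', '1',              # resynchronize audio if it drifts
    'output_synced.mp4',
]
print(' '.join(sync_cmd))
```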
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mov&lt;br /&gt;
	-c:v libx264: Encode video using H.264.&lt;br /&gt;
	-preset slow: Optimize for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjust quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforce a constant frame rate.&lt;br /&gt;
	-g 30: Insert a keyframe every 30 frames.&lt;br /&gt;
	-c:a pcm_s16le: Encode audio as uncompressed 16-bit PCM. Note that PCM audio is poorly supported in the MP4 container, so write to a .mov or .mkv file.&lt;br /&gt;
	-ar 48000: Set the audio sample rate to 48 kHz.&lt;br /&gt;
	-fflags +genpts: Generate accurate presentation timestamps.&lt;br /&gt;
	-async 1: Synchronize audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Additional tips===&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
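&lt;br /&gt;
For the low-latency case mentioned above, a real-time oriented encode could look like this sketch. The file names are placeholders, and the preset choice is an assumption, trading quality for encoding speed.&lt;br /&gt;

```python
# Sketch: low-latency H.264 encode flags for real-time processing.
def realtime_cmd(src, dst):
    return [
        'ffmpeg', '-i', src,
        '-c:v', 'libx264',
        '-tune', 'zerolatency',   # disables lookahead and B-frames
        '-preset', 'ultrafast',   # fastest encode, at some quality cost
        dst,
    ]

print(' '.join(realtime_cmd('camera_feed.mp4', 'live_out.mp4')))
```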
&lt;br /&gt;
==Editing==&lt;br /&gt;
&lt;br /&gt;
You can use '''Shotcut''', a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
Another option is '''DaVinci Resolve''', a free, professional-grade program for editing and converting video files, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6046</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6046"/>
		<updated>2025-04-29T09:31:32Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Python */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
Note that the Lab Computer displays are typically set to 1920×1080 at 120 Hz, which we have found sufficient for most applications; higher settings are possible. Later on this page we explain how to encode audio and video. We start with playing video, both with and without audio.&lt;br /&gt;
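&lt;br /&gt;
As a quick sanity check on timing budgets at these settings: at 120 Hz each display refresh lasts about 8.33 ms, so a 60 fps video maps cleanly onto two refreshes per frame, which is why matching the video frame rate to the refresh rate matters.&lt;br /&gt;

```python
# Frame-timing arithmetic for a 120 Hz display playing 60 fps video.
refresh_hz = 120
video_fps = 60

refresh_ms = 1000 / refresh_hz           # duration of one display refresh
frame_ms = 1000 / video_fps              # duration of one video frame
refreshes_per_frame = refresh_hz / video_fps

print(round(refresh_ms, 2), round(frame_ms, 2), refreshes_per_frame)  # 8.33 16.67 2.0
```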
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot; style=&amp;quot;width:100%&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart=False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with its audio track played separately:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Example demonstrating how to check whether video and audio encoding are correct:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import subprocess&lt;br /&gt;
import json&lt;br /&gt;
&lt;br /&gt;
file_path = &amp;quot;C_dyad1_video2_241123.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def check_video_file(file_path):&lt;br /&gt;
    try:&lt;br /&gt;
        # Run ffprobe to get file metadata in JSON format&lt;br /&gt;
        result = subprocess.run(&lt;br /&gt;
            [&lt;br /&gt;
                &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_streams&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_format&amp;quot;,&lt;br /&gt;
                &amp;quot;-print_format&amp;quot;, &amp;quot;json&amp;quot;,&lt;br /&gt;
                file_path&lt;br /&gt;
            ],&lt;br /&gt;
            stdout=subprocess.PIPE,&lt;br /&gt;
            stderr=subprocess.PIPE,&lt;br /&gt;
            text=True&lt;br /&gt;
        )&lt;br /&gt;
        metadata = json.loads(result.stdout)&lt;br /&gt;
    except Exception as e:&lt;br /&gt;
        print(f&amp;quot;Error running ffprobe: {e}&amp;quot;)&lt;br /&gt;
        return&lt;br /&gt;
    &lt;br /&gt;
    # Check for video stream&lt;br /&gt;
    video_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'video'), None)&lt;br /&gt;
    if video_stream:&lt;br /&gt;
        # Check video codec&lt;br /&gt;
        video_codec = video_stream.get('codec_name')&lt;br /&gt;
        if video_codec == 'h264':&lt;br /&gt;
            print(&amp;quot;Video codec: H.264&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video codec is NOT H.264 (Found: {video_codec})&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Extract and report frame rate&lt;br /&gt;
        if 'r_frame_rate' in video_stream:&lt;br /&gt;
            raw_frame_rate = video_stream['r_frame_rate']&lt;br /&gt;
            num, _, den = raw_frame_rate.partition('/')  # parse a string like &amp;quot;30/1&amp;quot; without eval()&lt;br /&gt;
            calculated_frame_rate = float(num) / float(den or 1)&lt;br /&gt;
            print(f&amp;quot;Frame rate: {calculated_frame_rate:.2f} FPS (raw: {raw_frame_rate})&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine raw frame rate from metadata.&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Check for constant frame rate&lt;br /&gt;
        if video_stream.get('avg_frame_rate'):&lt;br /&gt;
            num, _, den = video_stream['avg_frame_rate'].partition('/')&lt;br /&gt;
            avg_frame_rate = float(num) / float(den or 1)&lt;br /&gt;
            if abs(avg_frame_rate - calculated_frame_rate) &amp;lt; 0.01:&lt;br /&gt;
                print(&amp;quot;Frame rate: Constant&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(f&amp;quot;ERROR: Frame rate is NOT constant (avg_frame_rate: {avg_frame_rate:.2f} FPS)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine average frame rate consistency.&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check for frame drops&lt;br /&gt;
        try:&lt;br /&gt;
            frame_info_result = subprocess.run(&lt;br /&gt;
                [&lt;br /&gt;
                    &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                    &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                    &amp;quot;-select_streams&amp;quot;, &amp;quot;v:0&amp;quot;,&lt;br /&gt;
                    &amp;quot;-show_entries&amp;quot;, &amp;quot;frame=pts_time&amp;quot;,  # use &amp;quot;pkt_pts_time&amp;quot; on FFmpeg 4.x and older&lt;br /&gt;
                    &amp;quot;-of&amp;quot;, &amp;quot;csv=p=0&amp;quot;,&lt;br /&gt;
                    file_path&lt;br /&gt;
                ],&lt;br /&gt;
                stdout=subprocess.PIPE,&lt;br /&gt;
                stderr=subprocess.PIPE,&lt;br /&gt;
                text=True&lt;br /&gt;
            )&lt;br /&gt;
            # Filter out empty or invalid lines&lt;br /&gt;
            frame_times = [&lt;br /&gt;
                float(line.strip()) for line in frame_info_result.stdout.splitlines()&lt;br /&gt;
                if line.strip()  # Exclude empty lines&lt;br /&gt;
            ]&lt;br /&gt;
            expected_interval = 1.0 / calculated_frame_rate  # Expected time between frames&lt;br /&gt;
            frame_drops = [&lt;br /&gt;
                i for i, (t1, t2) in enumerate(zip(frame_times, frame_times[1:]))&lt;br /&gt;
                if abs(t2 - t1 - expected_interval) &amp;gt; 0.01  # Tolerance for irregularity&lt;br /&gt;
            ]&lt;br /&gt;
            if frame_drops:&lt;br /&gt;
                print(f&amp;quot;ERROR: Detected frame drops at frames: {frame_drops}&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(&amp;quot;No frame drops detected.&amp;quot;)&lt;br /&gt;
        except Exception as e:&lt;br /&gt;
            print(f&amp;quot;Error analyzing frames for drops: {e}&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No video stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check for audio stream&lt;br /&gt;
    audio_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'audio'), None)&lt;br /&gt;
    if audio_stream:&lt;br /&gt;
        # Check audio codec&lt;br /&gt;
        audio_codec = audio_stream.get('codec_name')&lt;br /&gt;
        if audio_codec == 'pcm_s16le':&lt;br /&gt;
            print(&amp;quot;Audio codec: WAV (PCM)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio codec is NOT WAV (PCM) (Found: {audio_codec})&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check sample rate&lt;br /&gt;
        sample_rate = audio_stream.get('sample_rate')&lt;br /&gt;
        if sample_rate == &amp;quot;44100&amp;quot;:&lt;br /&gt;
            print(&amp;quot;Audio sample rate: 44.1 kHz&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio sample rate is NOT 44.1 kHz (Found: {sample_rate} Hz)&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No audio stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check synchronization&lt;br /&gt;
    if video_stream and audio_stream:&lt;br /&gt;
        video_start_pts = float(video_stream.get('start_time', 0))&lt;br /&gt;
        audio_start_pts = float(audio_stream.get('start_time', 0))&lt;br /&gt;
        if abs(video_start_pts - audio_start_pts) &amp;lt; 0.01:  # Tolerance for synchronization&lt;br /&gt;
            print(&amp;quot;Video and audio are synchronized.&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video and audio are NOT synchronized. Start difference: {abs(video_start_pts - audio_start_pts):.3f} seconds&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: Could not determine synchronization (missing video or audio streams).&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Example usage&lt;br /&gt;
if __name__ == &amp;quot;__main__&amp;quot;:&lt;br /&gt;
    check_video_file(file_path)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to separate the audio and video streams:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '44100', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the [https://www.elgato.com/ww/en/p/facecam-mk2 Elgato Facecam] for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 [[media:Openh264-1.8.0-win64_.zip | codec(libx264)]]) &lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|Enforce a constant frame rate (a variable frame rate causes timing drift)&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
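&lt;br /&gt;
The recommended settings above can be combined into a single FFmpeg call. The sketch below only builds the argument list (pass it to subprocess.run to execute); the file names are placeholders, and the 15 Mbps bitrate is simply a midpoint of the recommended range.&lt;br /&gt;

```python
# Sketch: build an FFmpeg command implementing the recommended video settings.
# 'raw_recording.mov' and 'stimulus.mp4' are placeholder file names.
def build_encode_cmd(src, dst, fps=60, bitrate='15M'):
    return [
        'ffmpeg', '-i', src,
        '-c:v', 'libx264',        # H.264 codec
        '-r', str(fps),           # 60 fps
        '-vsync', 'cfr',          # enforce a constant frame rate
        '-b:v', bitrate,          # 10-20 Mbps recommended for Full HD
        '-s', '1920x1080',        # Full HD resolution
        dst,
    ]

cmd = build_encode_cmd('raw_recording.mov', 'stimulus.mp4')
print(' '.join(cmd))
```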
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height, freq)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and finalize the video file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
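&lt;br /&gt;
If you log a timestamp (e.g. time.monotonic()) for every captured frame, you can verify afterwards that the camera actually delivered the requested frame rate. A minimal sketch (the timestamp list here is synthetic):&lt;br /&gt;

```python
def measure_fps(timestamps):
    """Estimate the average frame rate from per-frame capture timestamps (seconds)."""
    if len(timestamps) < 2:
        raise ValueError('need at least two timestamps')
    span = timestamps[-1] - timestamps[0]
    return (len(timestamps) - 1) / span

# 61 synthetic timestamps spaced 1/60 s apart
ticks = [i / 60 for i in range(61)]
print(round(measure_fps(ticks), 1))  # -> 60.0
```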
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|PCM (WAV), uncompressed (or another lossless codec)&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
!Bit Depth&lt;br /&gt;
|16-bit&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
You can convert existing audio to these settings with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
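&lt;br /&gt;
As a quick sanity check, you can read a WAV file's header with Python's standard-library wave module to confirm it already matches these settings (a minimal sketch; the file name is a placeholder):&lt;br /&gt;

```python
import wave

def check_wav_settings(path, want_rate=48000, want_channels=2, want_bytes=2):
    """Report whether a WAV file matches the recommended audio format."""
    with wave.open(path, 'rb') as w:
        rate = w.getframerate()
        channels = w.getnchannels()
        sampwidth = w.getsampwidth()  # bytes per sample (2 = 16-bit)
    ok = rate == want_rate and channels == want_channels and sampwidth == want_bytes
    verdict = 'OK' if ok else 'convert with the ffmpeg command above'
    print(f"{path}: {rate} Hz, {channels} ch, {8 * sampwidth}-bit -> {verdict}")
    return ok

# check_wav_settings('input.wav')
```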
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 Settings to check&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your playback device → Properties:&lt;br /&gt;
&lt;br /&gt;
   Advanced tab:&lt;br /&gt;
&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit (Studio Quality).&lt;br /&gt;
&lt;br /&gt;
   - Under Exclusive Mode, check both &amp;quot;Allow applications to take exclusive control of this device&amp;quot; and &amp;quot;Give exclusive mode applications priority&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
   Enhancements tab:&lt;br /&gt;
&lt;br /&gt;
   - Disable all enhancements.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd  # used below to query the OS-level output device&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
wave_file.stop()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
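&lt;br /&gt;
Putting these options together: the sketch below builds one such command as a subprocess argument list (input/output names are placeholders; audio is re-encoded so that -async can take effect):&lt;br /&gt;

```python
def build_sync_cmd(src, dst):
    """Build an ffmpeg command that explicitly maps the first video and
    audio streams and regenerates timestamps. File names are placeholders."""
    return [
        'ffmpeg', '-y',
        '-fflags', '+genpts',   # regenerate presentation timestamps
        '-i', src,
        '-map', '0:v:0',        # take only the first video stream
        '-map', '0:a:0',        # take only the first audio stream
        '-async', '1',          # let ffmpeg correct small audio drift
        '-c:v', 'copy',         # leave video untouched
        '-c:a', 'aac',          # re-encode audio so -async can apply
        dst,
    ]

print(' '.join(build_sync_cmd('input.mp4', 'output_synced.mp4')))
# To actually run it: subprocess.run(build_sync_cmd(...), check=True)
```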
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mp4&lt;br /&gt;
	-c:v libx264: Encode video using H.264.&lt;br /&gt;
	-preset slow: Optimize for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces constant frame rate.&lt;br /&gt;
	-g 30: Sets the keyframe (GOP) interval to 30 frames.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio in uncompressed WAV format.&lt;br /&gt;
	-ar 48000: Sets audio sample rate to 48.0 kHz.&lt;br /&gt;
	-fflags +genpts: Ensures accurate timestamps.&lt;br /&gt;
	-async 1: Synchronizes audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Best Practices===&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
&lt;br /&gt;
For simple editing tasks, you can use '''Shotcut''', an open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
Another option is '''DaVinci Resolve''', a free, professional-grade program for editing and converting video files, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6045</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6045"/>
		<updated>2025-04-29T09:28:32Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Video encoding */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
Note that the lab computer displays are typically set to 1920×1080 at 120 Hz, which is sufficient for most applications; higher settings are possible on request. Below we first cover playing video, both with and without audio, and later explain how to record and encode audio and video. &lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart= False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with audio disconnected:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Example demonstrating how to check whether video and audio encoding are correct:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import subprocess&lt;br /&gt;
import json&lt;br /&gt;
&lt;br /&gt;
file_path = &amp;quot;C_dyad1_video2_241123.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def check_video_file(file_path):&lt;br /&gt;
    try:&lt;br /&gt;
        # Run ffprobe to get file metadata in JSON format&lt;br /&gt;
        result = subprocess.run(&lt;br /&gt;
            [&lt;br /&gt;
                &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_streams&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_format&amp;quot;,&lt;br /&gt;
                &amp;quot;-print_format&amp;quot;, &amp;quot;json&amp;quot;,&lt;br /&gt;
                file_path&lt;br /&gt;
            ],&lt;br /&gt;
            stdout=subprocess.PIPE,&lt;br /&gt;
            stderr=subprocess.PIPE,&lt;br /&gt;
            text=True&lt;br /&gt;
        )&lt;br /&gt;
        metadata = json.loads(result.stdout)&lt;br /&gt;
    except Exception as e:&lt;br /&gt;
        print(f&amp;quot;Error running ffprobe: {e}&amp;quot;)&lt;br /&gt;
        return&lt;br /&gt;
    &lt;br /&gt;
    # Check for video stream&lt;br /&gt;
    video_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'video'), None)&lt;br /&gt;
    if video_stream:&lt;br /&gt;
        # Check video codec&lt;br /&gt;
        video_codec = video_stream.get('codec_name')&lt;br /&gt;
        if video_codec == 'h264':&lt;br /&gt;
            print(&amp;quot;Video codec: H.264&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video codec is NOT H.264 (Found: {video_codec})&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Extract and report frame rate&lt;br /&gt;
        if 'r_frame_rate' in video_stream:&lt;br /&gt;
            raw_frame_rate = video_stream['r_frame_rate']&lt;br /&gt;
            num, den = map(int, raw_frame_rate.split('/'))  # parse strings like &amp;quot;30/1&amp;quot; without eval&lt;br /&gt;
            calculated_frame_rate = num / den&lt;br /&gt;
            print(f&amp;quot;Frame rate: {calculated_frame_rate:.2f} FPS (raw: {raw_frame_rate})&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine raw frame rate from metadata.&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Check for constant frame rate&lt;br /&gt;
        if video_stream.get('avg_frame_rate'):&lt;br /&gt;
            num, den = map(int, video_stream['avg_frame_rate'].split('/'))&lt;br /&gt;
            avg_frame_rate = num / den&lt;br /&gt;
            if abs(avg_frame_rate - calculated_frame_rate) &amp;lt; 0.01:&lt;br /&gt;
                print(&amp;quot;Frame rate: Constant&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(f&amp;quot;ERROR: Frame rate is NOT constant (avg_frame_rate: {avg_frame_rate:.2f} FPS)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine average frame rate consistency.&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check for frame drops&lt;br /&gt;
        try:&lt;br /&gt;
            frame_info_result = subprocess.run(&lt;br /&gt;
                [&lt;br /&gt;
                    &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                    &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                    &amp;quot;-select_streams&amp;quot;, &amp;quot;v:0&amp;quot;,&lt;br /&gt;
                    &amp;quot;-show_entries&amp;quot;, &amp;quot;frame=pts_time&amp;quot;,  # use &amp;quot;frame=pkt_pts_time&amp;quot; on FFmpeg &amp;lt; 5&lt;br /&gt;
                    &amp;quot;-of&amp;quot;, &amp;quot;csv=p=0&amp;quot;,&lt;br /&gt;
                    file_path&lt;br /&gt;
                ],&lt;br /&gt;
                stdout=subprocess.PIPE,&lt;br /&gt;
                stderr=subprocess.PIPE,&lt;br /&gt;
                text=True&lt;br /&gt;
            )&lt;br /&gt;
            # Filter out empty or invalid lines&lt;br /&gt;
            frame_times = [&lt;br /&gt;
                float(line.strip()) for line in frame_info_result.stdout.splitlines()&lt;br /&gt;
                if line.strip()  # Exclude empty lines&lt;br /&gt;
            ]&lt;br /&gt;
            expected_interval = 1.0 / calculated_frame_rate  # Expected time between frames&lt;br /&gt;
            frame_drops = [&lt;br /&gt;
                i for i, (t1, t2) in enumerate(zip(frame_times, frame_times[1:]))&lt;br /&gt;
                if abs(t2 - t1 - expected_interval) &amp;gt; 0.01  # Tolerance for irregularity&lt;br /&gt;
            ]&lt;br /&gt;
            if frame_drops:&lt;br /&gt;
                print(f&amp;quot;ERROR: Detected frame drops at frames: {frame_drops}&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(&amp;quot;No frame drops detected.&amp;quot;)&lt;br /&gt;
        except Exception as e:&lt;br /&gt;
            print(f&amp;quot;Error analyzing frames for drops: {e}&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No video stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check for audio stream&lt;br /&gt;
    audio_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'audio'), None)&lt;br /&gt;
    if audio_stream:&lt;br /&gt;
        # Check audio codec&lt;br /&gt;
        audio_codec = audio_stream.get('codec_name')&lt;br /&gt;
        if audio_codec == 'pcm_s16le':&lt;br /&gt;
            print(&amp;quot;Audio codec: WAV (PCM)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio codec is NOT WAV (PCM) (Found: {audio_codec})&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check sample rate&lt;br /&gt;
        sample_rate = audio_stream.get('sample_rate')&lt;br /&gt;
        if sample_rate == &amp;quot;44100&amp;quot;:&lt;br /&gt;
            print(&amp;quot;Audio sample rate: 44.1 kHz&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio sample rate is NOT 44.1 kHz (Found: {sample_rate} Hz)&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No audio stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check synchronization&lt;br /&gt;
    if video_stream and audio_stream:&lt;br /&gt;
        video_start_pts = float(video_stream.get('start_time', 0))&lt;br /&gt;
        audio_start_pts = float(audio_stream.get('start_time', 0))&lt;br /&gt;
        if abs(video_start_pts - audio_start_pts) &amp;lt; 0.01:  # Tolerance for synchronization&lt;br /&gt;
            print(&amp;quot;Video and audio are synchronized.&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video and audio are NOT synchronized. Start difference: {abs(video_start_pts - audio_start_pts):.3f} seconds&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: Could not determine synchronization (missing video or audio streams).&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Example usage&lt;br /&gt;
if __name__ == &amp;quot;__main__&amp;quot;:&lt;br /&gt;
    check_video_file(file_path)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to disconnect audio from video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '44100', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
&lt;br /&gt;
- Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
&lt;br /&gt;
- Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
&lt;br /&gt;
- Stabilize the camera and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
&lt;br /&gt;
- Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
&lt;br /&gt;
You can use the [https://www.elgato.com/ww/en/p/facecam-mk2 Facecam] for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 [[media:Openh264-1.8.0-win64_.zip | codec(libx264)]]) &lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
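&lt;br /&gt;
To estimate the disk space these settings require, multiply bitrate by duration; a minimal sketch, assuming an average bitrate of 15 Mbps:&lt;br /&gt;

```python
def estimate_size_mb(bitrate_mbps, minutes):
    """Approximate file size in megabytes: Mbit/s * seconds / 8 bits per byte."""
    return bitrate_mbps * minutes * 60 / 8

print(estimate_size_mb(15, 30))  # a 30-minute Full HD recording -> 3375.0 MB
```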
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
# Set the Windows timer resolution to 1 ms for accurate sleeps (Windows-only)&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height, freq)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and finalize the video file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
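&lt;br /&gt;
If you log a timestamp (e.g. time.monotonic()) for every captured frame, you can verify afterwards that the camera actually delivered the requested frame rate. A minimal sketch (the timestamp list here is synthetic):&lt;br /&gt;

```python
def measure_fps(timestamps):
    """Estimate the average frame rate from per-frame capture timestamps (seconds)."""
    if len(timestamps) < 2:
        raise ValueError('need at least two timestamps')
    span = timestamps[-1] - timestamps[0]
    return (len(timestamps) - 1) / span

# 61 synthetic timestamps spaced 1/60 s apart
ticks = [i / 60 for i in range(61)]
print(round(measure_fps(ticks), 1))  # -> 60.0
```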
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|PCM (WAV), uncompressed (or another lossless codec)&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
!Bit Depth&lt;br /&gt;
|16-bit&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
You can convert existing audio to these settings with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
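&lt;br /&gt;
As a quick sanity check, you can read a WAV file's header with Python's standard-library wave module to confirm it already matches these settings (a minimal sketch; the file name is a placeholder):&lt;br /&gt;

```python
import wave

def check_wav_settings(path, want_rate=48000, want_channels=2, want_bytes=2):
    """Report whether a WAV file matches the recommended audio format."""
    with wave.open(path, 'rb') as w:
        rate = w.getframerate()
        channels = w.getnchannels()
        sampwidth = w.getsampwidth()  # bytes per sample (2 = 16-bit)
    ok = rate == want_rate and channels == want_channels and sampwidth == want_bytes
    verdict = 'OK' if ok else 'convert with the ffmpeg command above'
    print(f"{path}: {rate} Hz, {channels} ch, {8 * sampwidth}-bit -> {verdict}")
    return ok

# check_wav_settings('input.wav')
```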
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 Settings to check&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your playback device → Properties:&lt;br /&gt;
&lt;br /&gt;
   Advanced tab:&lt;br /&gt;
&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit (Studio Quality).&lt;br /&gt;
&lt;br /&gt;
   - Under Exclusive Mode, check both &amp;quot;Allow applications to take exclusive control of this device&amp;quot; and &amp;quot;Give exclusive mode applications priority&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
   Enhancements tab:&lt;br /&gt;
&lt;br /&gt;
   - Disable all enhancements.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd  # used below to query the OS-level output device&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
wave_file.stop()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
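&lt;br /&gt;
Putting these options together: the sketch below builds one such command as a subprocess argument list (input/output names are placeholders; audio is re-encoded so that -async can take effect):&lt;br /&gt;

```python
def build_sync_cmd(src, dst):
    """Build an ffmpeg command that explicitly maps the first video and
    audio streams and regenerates timestamps. File names are placeholders."""
    return [
        'ffmpeg', '-y',
        '-fflags', '+genpts',   # regenerate presentation timestamps
        '-i', src,
        '-map', '0:v:0',        # take only the first video stream
        '-map', '0:a:0',        # take only the first audio stream
        '-async', '1',          # let ffmpeg correct small audio drift
        '-c:v', 'copy',         # leave video untouched
        '-c:a', 'aac',          # re-encode audio so -async can apply
        dst,
    ]

print(' '.join(build_sync_cmd('input.mp4', 'output_synced.mp4')))
# To actually run it: subprocess.run(build_sync_cmd(...), check=True)
```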
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mp4&lt;br /&gt;
	-c:v libx264: Encode video using H.264.&lt;br /&gt;
	-preset slow: Optimize for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces constant frame rate.&lt;br /&gt;
	-g 30: Sets the keyframe (GOP) interval to 30 frames.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio in uncompressed WAV format.&lt;br /&gt;
	-ar 48000: Sets audio sample rate to 48.0 kHz.&lt;br /&gt;
	-fflags +genpts: Ensures accurate timestamps.&lt;br /&gt;
	-async 1: Synchronizes audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Best Practices===&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
&lt;br /&gt;
You can use '''Shotcut''', a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
Alternatively, '''DaVinci Resolve''' is a free, professional-grade program for editing and converting video files, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=File:Openh264-1.8.0-win64_.zip&amp;diff=6044</id>
		<title>File:Openh264-1.8.0-win64 .zip</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=File:Openh264-1.8.0-win64_.zip&amp;diff=6044"/>
		<updated>2025-04-29T09:26:24Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=User:P.dewater/common.js&amp;diff=6043</id>
		<title>User:P.dewater/common.js</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=User:P.dewater/common.js&amp;diff=6043"/>
		<updated>2025-04-29T09:22:03Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: Created page with &amp;quot;document.addEventListener('DOMContentLoaded', function () {     // Zoek alle codeblokken     document.querySelectorAll('.mw-highlight pre').forEach(function (preBlock) {...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;document.addEventListener('DOMContentLoaded', function () {&lt;br /&gt;
    // Find all code blocks&lt;br /&gt;
    document.querySelectorAll('.mw-highlight pre').forEach(function (preBlock) {&lt;br /&gt;
        // Create a button&lt;br /&gt;
        var button = document.createElement('button');&lt;br /&gt;
        button.innerText = 'Copy Code';&lt;br /&gt;
        button.style.marginBottom = '5px';&lt;br /&gt;
        button.style.padding = '4px 8px';&lt;br /&gt;
        button.style.fontSize = '12px';&lt;br /&gt;
        button.style.cursor = 'pointer';&lt;br /&gt;
&lt;br /&gt;
        // Function to copy the code without line numbers&lt;br /&gt;
        button.addEventListener('click', function () {&lt;br /&gt;
            var codeText = '';&lt;br /&gt;
            preBlock.childNodes.forEach(function (node) {&lt;br /&gt;
                if (node.nodeType === Node.TEXT_NODE) {&lt;br /&gt;
                    codeText += node.textContent;&lt;br /&gt;
                } else if (node.nodeType === Node.ELEMENT_NODE) {&lt;br /&gt;
                    if (!node.classList.contains('linenos')) {&lt;br /&gt;
                        codeText += node.textContent;&lt;br /&gt;
                    }&lt;br /&gt;
                }&lt;br /&gt;
            });&lt;br /&gt;
&lt;br /&gt;
            navigator.clipboard.writeText(codeText).then(function () {&lt;br /&gt;
                button.innerText = 'Copied!';&lt;br /&gt;
                setTimeout(function () {&lt;br /&gt;
                    button.innerText = 'Copy Code';&lt;br /&gt;
                }, 2000);&lt;br /&gt;
            }, function (err) {&lt;br /&gt;
                console.error('Failed to copy: ', err);&lt;br /&gt;
            });&lt;br /&gt;
        });&lt;br /&gt;
&lt;br /&gt;
        // Add the button before the code block&lt;br /&gt;
        preBlock.parentNode.insertBefore(button, preBlock);&lt;br /&gt;
    });&lt;br /&gt;
});&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=User:P.dewater/common.css&amp;diff=6042</id>
		<title>User:P.dewater/common.css</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=User:P.dewater/common.css&amp;diff=6042"/>
		<updated>2025-04-29T09:21:13Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;/* :P */&lt;br /&gt;
&lt;br /&gt;
.mw-wiki-logo {&lt;br /&gt;
  background-image: url(&amp;quot;http://avatarfiles.alphacoders.com/717/717.gif&amp;quot;);&lt;br /&gt;
}&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=User:P.dewater/common.css&amp;diff=6041</id>
		<title>User:P.dewater/common.css</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=User:P.dewater/common.css&amp;diff=6041"/>
		<updated>2025-04-29T09:12:10Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;/* :P */&lt;br /&gt;
&lt;br /&gt;
.mw-wiki-logo {&lt;br /&gt;
  background-image: url(&amp;quot;http://avatarfiles.alphacoders.com/717/717.gif&amp;quot;);&lt;br /&gt;
}&lt;br /&gt;
/* Prevent line numbers in code blocks from being copied */&lt;br /&gt;
.syntaxhighlight .linenos {&lt;br /&gt;
  user-select: none;&lt;br /&gt;
  -moz-user-select: none;&lt;br /&gt;
  -webkit-user-select: none;&lt;br /&gt;
}&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=User:P.dewater/common.css&amp;diff=6038</id>
		<title>User:P.dewater/common.css</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=User:P.dewater/common.css&amp;diff=6038"/>
		<updated>2025-04-29T09:05:06Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;/* :P */&lt;br /&gt;
&lt;br /&gt;
.mw-wiki-logo {&lt;br /&gt;
  background-image: url(&amp;quot;http://avatarfiles.alphacoders.com/717/717.gif&amp;quot;);&lt;br /&gt;
}&lt;br /&gt;
/* Prevent line numbers in code blocks from being copied */&lt;br /&gt;
.syntaxhighlight .line-numbers {&lt;br /&gt;
    user-select: none;&lt;br /&gt;
    -moz-user-select: none;&lt;br /&gt;
    -webkit-user-select: none;&lt;br /&gt;
}&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6037</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6037"/>
		<updated>2025-04-29T08:00:17Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Recommended FFmpeg Command */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
Note that the Lab Computer displays are typically set to 1920×1080 at 120 Hz, which we have found sufficient for most applications; higher settings are possible. Later on this page we explain how to encode audio and video. We start with playing video, both with and without audio.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart=False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video and a separate audio file in sync:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Example demonstrating how to check whether the video and audio encoding are correct:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import subprocess&lt;br /&gt;
import json&lt;br /&gt;
&lt;br /&gt;
file_path = &amp;quot;C_dyad1_video2_241123.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def check_video_file(file_path):&lt;br /&gt;
    try:&lt;br /&gt;
        # Run ffprobe to get file metadata in JSON format&lt;br /&gt;
        result = subprocess.run(&lt;br /&gt;
            [&lt;br /&gt;
                &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_streams&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_format&amp;quot;,&lt;br /&gt;
                &amp;quot;-print_format&amp;quot;, &amp;quot;json&amp;quot;,&lt;br /&gt;
                file_path&lt;br /&gt;
            ],&lt;br /&gt;
            stdout=subprocess.PIPE,&lt;br /&gt;
            stderr=subprocess.PIPE,&lt;br /&gt;
            text=True&lt;br /&gt;
        )&lt;br /&gt;
        metadata = json.loads(result.stdout)&lt;br /&gt;
    except Exception as e:&lt;br /&gt;
        print(f&amp;quot;Error running ffprobe: {e}&amp;quot;)&lt;br /&gt;
        return&lt;br /&gt;
    &lt;br /&gt;
    # Check for video stream&lt;br /&gt;
    video_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'video'), None)&lt;br /&gt;
    if video_stream:&lt;br /&gt;
        # Check video codec&lt;br /&gt;
        video_codec = video_stream.get('codec_name')&lt;br /&gt;
        if video_codec == 'h264':&lt;br /&gt;
            print(&amp;quot;Video codec: H.264&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video codec is NOT H.264 (Found: {video_codec})&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Extract and report frame rate&lt;br /&gt;
        if 'r_frame_rate' in video_stream:&lt;br /&gt;
            raw_frame_rate = video_stream['r_frame_rate']&lt;br /&gt;
            calculated_frame_rate = eval(raw_frame_rate)  # Convert string like &amp;quot;30/1&amp;quot; to float&lt;br /&gt;
            print(f&amp;quot;Frame rate: {calculated_frame_rate:.2f} FPS (raw: {raw_frame_rate})&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine raw frame rate from metadata.&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Check for constant frame rate&lt;br /&gt;
        if video_stream.get('avg_frame_rate'):&lt;br /&gt;
            avg_frame_rate = eval(video_stream['avg_frame_rate'])&lt;br /&gt;
            if abs(avg_frame_rate - calculated_frame_rate) &amp;lt; 0.01:&lt;br /&gt;
                print(&amp;quot;Frame rate: Constant&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(f&amp;quot;ERROR: Frame rate is NOT constant (avg_frame_rate: {avg_frame_rate:.2f} FPS)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine average frame rate consistency.&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check for frame drops&lt;br /&gt;
        try:&lt;br /&gt;
            frame_info_result = subprocess.run(&lt;br /&gt;
                [&lt;br /&gt;
                    &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                    &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                    &amp;quot;-select_streams&amp;quot;, &amp;quot;v:0&amp;quot;,&lt;br /&gt;
                    &amp;quot;-show_entries&amp;quot;, &amp;quot;frame=pkt_pts_time&amp;quot;,&lt;br /&gt;
                    &amp;quot;-of&amp;quot;, &amp;quot;csv=p=0&amp;quot;,&lt;br /&gt;
                    file_path&lt;br /&gt;
                ],&lt;br /&gt;
                stdout=subprocess.PIPE,&lt;br /&gt;
                stderr=subprocess.PIPE,&lt;br /&gt;
                text=True&lt;br /&gt;
            )&lt;br /&gt;
            # Filter out empty or invalid lines&lt;br /&gt;
            frame_times = [&lt;br /&gt;
                float(line.strip()) for line in frame_info_result.stdout.splitlines()&lt;br /&gt;
                if line.strip()  # Exclude empty lines&lt;br /&gt;
            ]&lt;br /&gt;
            expected_interval = 1.0 / calculated_frame_rate  # Expected time between frames&lt;br /&gt;
            frame_drops = [&lt;br /&gt;
                i for i, (t1, t2) in enumerate(zip(frame_times, frame_times[1:]))&lt;br /&gt;
                if abs(t2 - t1 - expected_interval) &amp;gt; 0.01  # Tolerance for irregularity&lt;br /&gt;
            ]&lt;br /&gt;
            if frame_drops:&lt;br /&gt;
                print(f&amp;quot;ERROR: Detected frame drops at frames: {frame_drops}&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(&amp;quot;No frame drops detected.&amp;quot;)&lt;br /&gt;
        except Exception as e:&lt;br /&gt;
            print(f&amp;quot;Error analyzing frames for drops: {e}&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No video stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check for audio stream&lt;br /&gt;
    audio_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'audio'), None)&lt;br /&gt;
    if audio_stream:&lt;br /&gt;
        # Check audio codec&lt;br /&gt;
        audio_codec = audio_stream.get('codec_name')&lt;br /&gt;
        if audio_codec == 'pcm_s16le':&lt;br /&gt;
            print(&amp;quot;Audio codec: WAV (PCM)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio codec is NOT WAV (PCM) (Found: {audio_codec})&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check sample rate&lt;br /&gt;
        sample_rate = audio_stream.get('sample_rate')&lt;br /&gt;
        if sample_rate == &amp;quot;44100&amp;quot;:&lt;br /&gt;
            print(&amp;quot;Audio sample rate: 44.1 kHz&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio sample rate is NOT 44.1 kHz (Found: {sample_rate} Hz)&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No audio stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check synchronization&lt;br /&gt;
    if video_stream and audio_stream:&lt;br /&gt;
        video_start_pts = float(video_stream.get('start_time', 0))&lt;br /&gt;
        audio_start_pts = float(audio_stream.get('start_time', 0))&lt;br /&gt;
        if abs(video_start_pts - audio_start_pts) &amp;lt; 0.01:  # Tolerance for synchronization&lt;br /&gt;
            print(&amp;quot;Video and audio are synchronized.&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video and audio are NOT synchronized. Start difference: {abs(video_start_pts - audio_start_pts):.3f} seconds&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: Could not determine synchronization (missing video or audio streams).&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Example usage&lt;br /&gt;
if __name__ == &amp;quot;__main__&amp;quot;:&lt;br /&gt;
    check_video_file(file_path)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
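The check script above converts ffprobe's fractional rate strings (such as &amp;quot;30/1&amp;quot;) with eval. As a sketch, the same conversion can be done more safely with the standard library:&lt;br /&gt;

```python
from fractions import Fraction

def parse_frame_rate(ratio: str) -> float:
    """Convert an ffprobe rate string such as '60/1' or '30000/1001' to fps."""
    return float(Fraction(ratio))

print(parse_frame_rate("60/1"))  # → 60.0
print(round(parse_frame_rate("30000/1001"), 3))  # → 29.97
```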
&lt;br /&gt;
Example demonstrating how to split a file into separate video-only and audio-only files:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '44100', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
- Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
- Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
- Stabilize the camera and avoid automatic exposure, white balance, or focus during recording to prevent inconsistencies.&lt;br /&gt;
- Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the [https://www.elgato.com/ww/en/p/facecam-mk2 facecam] for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|Enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height, fps=freq)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|lossless or high-quality codecs&lt;br /&gt;
|-&lt;br /&gt;
!PCM (WAV)&lt;br /&gt;
|uncompressed&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Set your audio for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
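To verify that a converted WAV file actually has these settings, the standard-library wave module can read the header. A minimal sketch (the file name is illustrative; the demo writes a short silent file with the recommended settings and reads it back):&lt;br /&gt;

```python
import wave

def wav_format(path):
    """Return (sample_rate, channels, sample_width_bytes) of a WAV file."""
    with wave.open(path, "rb") as w:
        return w.getframerate(), w.getnchannels(), w.getsampwidth()

# Demo: write a short silent stereo file with the recommended settings
with wave.open("demo_fixed.wav", "wb") as w:
    w.setnchannels(2)      # stereo (-ac 2)
    w.setsampwidth(2)      # 16-bit samples (-sample_fmt s16)
    w.setframerate(48000)  # 48 kHz (-ar 48000)
    w.writeframes(b"\x00\x00\x00\x00" * 48)  # one millisecond of silence

print(wav_format("demo_fixed.wav"))  # → (48000, 2, 2)
```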
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click the device → Properties → Advanced tab:&lt;br /&gt;
&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit, Studio Quality.&lt;br /&gt;
&lt;br /&gt;
   - Disable sound enhancements.&lt;br /&gt;
&lt;br /&gt;
   - In the same properties window, go to Enhancements tab → Disable all enhancements.&lt;br /&gt;
&lt;br /&gt;
   - Exclusive Mode:&lt;br /&gt;
&lt;br /&gt;
   - In the same Advanced tab.&lt;br /&gt;
&lt;br /&gt;
   - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
   - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mov&lt;br /&gt;
	-c:v libx264: Encodes video using H.264.&lt;br /&gt;
	-preset slow: Optimizes for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces a constant frame rate.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio as uncompressed PCM; note that the MP4 container does not support PCM audio, hence the .mov output.&lt;br /&gt;
	-ar 48000: Sets the audio sample rate to 48 kHz.&lt;br /&gt;
	-fflags +genpts: Ensures accurate timestamps.&lt;br /&gt;
	-async 1: Synchronizes the audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
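The same command can be issued from Python with subprocess, as in the earlier examples. This is a sketch with placeholder file names; the .mov extension is used here because the MP4 container does not accept PCM audio:&lt;br /&gt;

```python
import os
import shutil
import subprocess

input_file = "input.mp4"    # placeholder, as in the command above
output_file = "output.mov"  # MOV container: MP4 does not accept PCM audio

# The recommended encoding command, built as an argument list
cmd = [
    "ffmpeg", "-i", input_file,
    "-c:v", "libx264", "-preset", "slow", "-crf", "18",
    "-vsync", "cfr", "-g", "30",
    "-c:a", "pcm_s16le", "-ar", "48000",
    "-fflags", "+genpts", "-async", "1",
    output_file,
]
print(" ".join(cmd))

# Run only when ffmpeg and the input file are actually present
if shutil.which("ffmpeg") and os.path.exists(input_file):
    subprocess.run(cmd, check=True)
```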
&lt;br /&gt;
===Additional Tips===&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
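As a testing aid, a sketch (assuming ffprobe is installed) that compares the stream durations ffprobe reports for the video and audio tracks; a large difference suggests a synchronization problem:&lt;br /&gt;

```python
import json
import shutil
import subprocess

def durations_by_type(meta):
    """Map codec_type ('video'/'audio') to stream duration in seconds."""
    return {s["codec_type"]: float(s.get("duration", 0.0)) for s in meta["streams"]}

def check_sync(path):
    """Compare video and audio stream durations reported by ffprobe."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-show_streams", "-print_format", "json", path],
        capture_output=True, text=True,
    )
    durations = durations_by_type(json.loads(result.stdout))
    drift = abs(durations.get("video", 0.0) - durations.get("audio", 0.0))
    print(f"Duration difference: {drift:.3f} s")

# Example usage (only meaningful when ffprobe and the file exist):
# check_sync("final_synced_output.mp4")
```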
&lt;br /&gt;
==Editing==&lt;br /&gt;
&lt;br /&gt;
You can use '''Shotcut''', a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
Alternatively, '''DaVinci Resolve''' is a free, professional-grade program for editing and converting video files, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6036</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6036"/>
		<updated>2025-04-29T07:58:40Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Python */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
Note that the lab computer displays are typically set to 1920×1080 at 120 Hz, which we have found sufficient for most applications; higher settings are possible. Later on this page we explain how to encode audio and video. We will start with playing video, both with and without audio. &lt;br /&gt;
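As a rule of thumb, judder-free playback requires the display refresh rate to be an integer multiple of the video frame rate. A minimal sketch of that arithmetic (plain Python, no lab-specific assumptions):&lt;br /&gt;

```python
def frames_per_refresh(display_hz, video_fps):
    """Number of display refreshes each video frame spans.

    An integer result (e.g. 60 fps on a 120 Hz display -> 2.0) means every
    frame is shown for the same number of refreshes; a fractional result
    means some frames are shown longer than others, which appears as judder.
    """
    return display_hz / video_fps

print(frames_per_refresh(120, 60))  # 2.0 -> smooth playback
print(frames_per_refresh(120, 25))  # 4.8 -> judder (frames alternate 4 and 5 refreshes)
```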
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File path for the video&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart=False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with audio disconnected:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
# Set audio prefs BEFORE importing psychopy.sound, otherwise they are ignored&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Example demonstrating how to check whether the video and audio encoding are correct:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import subprocess&lt;br /&gt;
import json&lt;br /&gt;
&lt;br /&gt;
file_path = &amp;quot;C_dyad1_video2_241123.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def check_video_file(file_path):&lt;br /&gt;
    try:&lt;br /&gt;
        # Run ffprobe to get file metadata in JSON format&lt;br /&gt;
        result = subprocess.run(&lt;br /&gt;
            [&lt;br /&gt;
                &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_streams&amp;quot;,&lt;br /&gt;
                &amp;quot;-show_format&amp;quot;,&lt;br /&gt;
                &amp;quot;-print_format&amp;quot;, &amp;quot;json&amp;quot;,&lt;br /&gt;
                file_path&lt;br /&gt;
            ],&lt;br /&gt;
            stdout=subprocess.PIPE,&lt;br /&gt;
            stderr=subprocess.PIPE,&lt;br /&gt;
            text=True&lt;br /&gt;
        )&lt;br /&gt;
        metadata = json.loads(result.stdout)&lt;br /&gt;
    except Exception as e:&lt;br /&gt;
        print(f&amp;quot;Error running ffprobe: {e}&amp;quot;)&lt;br /&gt;
        return&lt;br /&gt;
    &lt;br /&gt;
    # Check for video stream&lt;br /&gt;
    video_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'video'), None)&lt;br /&gt;
    if video_stream:&lt;br /&gt;
        # Check video codec&lt;br /&gt;
        video_codec = video_stream.get('codec_name')&lt;br /&gt;
        if video_codec == 'h264':&lt;br /&gt;
            print(&amp;quot;Video codec: H.264&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video codec is NOT H.264 (Found: {video_codec})&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Extract and report frame rate&lt;br /&gt;
        calculated_frame_rate = None&lt;br /&gt;
        if 'r_frame_rate' in video_stream:&lt;br /&gt;
            raw_frame_rate = video_stream['r_frame_rate']&lt;br /&gt;
            num, den = map(int, raw_frame_rate.split('/'))  # Parse a string like &amp;quot;30/1&amp;quot; without eval&lt;br /&gt;
            calculated_frame_rate = num / den&lt;br /&gt;
            print(f&amp;quot;Frame rate: {calculated_frame_rate:.2f} FPS (raw: {raw_frame_rate})&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine raw frame rate from metadata.&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
        # Check for constant frame rate&lt;br /&gt;
        if video_stream.get('avg_frame_rate') and calculated_frame_rate:&lt;br /&gt;
            num, den = map(int, video_stream['avg_frame_rate'].split('/'))&lt;br /&gt;
            avg_frame_rate = num / den if den else 0.0&lt;br /&gt;
            if abs(avg_frame_rate - calculated_frame_rate) &amp;lt; 0.01:&lt;br /&gt;
                print(&amp;quot;Frame rate: Constant&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(f&amp;quot;ERROR: Frame rate is NOT constant (avg_frame_rate: {avg_frame_rate:.2f} FPS)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(&amp;quot;ERROR: Could not determine average frame rate consistency.&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check for frame drops&lt;br /&gt;
        try:&lt;br /&gt;
            frame_info_result = subprocess.run(&lt;br /&gt;
                [&lt;br /&gt;
                    &amp;quot;ffprobe&amp;quot;,&lt;br /&gt;
                    &amp;quot;-v&amp;quot;, &amp;quot;error&amp;quot;,&lt;br /&gt;
                    &amp;quot;-select_streams&amp;quot;, &amp;quot;v:0&amp;quot;,&lt;br /&gt;
                    &amp;quot;-show_entries&amp;quot;, &amp;quot;frame=pts_time&amp;quot;,  # use &amp;quot;frame=pkt_pts_time&amp;quot; on FFmpeg 4.x and older&lt;br /&gt;
                    &amp;quot;-of&amp;quot;, &amp;quot;csv=p=0&amp;quot;,&lt;br /&gt;
                    file_path&lt;br /&gt;
                ],&lt;br /&gt;
                stdout=subprocess.PIPE,&lt;br /&gt;
                stderr=subprocess.PIPE,&lt;br /&gt;
                text=True&lt;br /&gt;
            )&lt;br /&gt;
            # Filter out empty or invalid lines&lt;br /&gt;
            frame_times = [&lt;br /&gt;
                float(line.strip()) for line in frame_info_result.stdout.splitlines()&lt;br /&gt;
                if line.strip()  # Exclude empty lines&lt;br /&gt;
            ]&lt;br /&gt;
            expected_interval = 1.0 / calculated_frame_rate  # Expected time between frames&lt;br /&gt;
            frame_drops = [&lt;br /&gt;
                i for i, (t1, t2) in enumerate(zip(frame_times, frame_times[1:]))&lt;br /&gt;
                if abs(t2 - t1 - expected_interval) &amp;gt; 0.01  # Tolerance for irregularity&lt;br /&gt;
            ]&lt;br /&gt;
            if frame_drops:&lt;br /&gt;
                print(f&amp;quot;ERROR: Detected frame drops at frames: {frame_drops}&amp;quot;)&lt;br /&gt;
            else:&lt;br /&gt;
                print(&amp;quot;No frame drops detected.&amp;quot;)&lt;br /&gt;
        except Exception as e:&lt;br /&gt;
            print(f&amp;quot;Error analyzing frames for drops: {e}&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No video stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check for audio stream&lt;br /&gt;
    audio_stream = next((stream for stream in metadata['streams'] if stream['codec_type'] == 'audio'), None)&lt;br /&gt;
    if audio_stream:&lt;br /&gt;
        # Check audio codec&lt;br /&gt;
        audio_codec = audio_stream.get('codec_name')&lt;br /&gt;
        if audio_codec == 'pcm_s16le':&lt;br /&gt;
            print(&amp;quot;Audio codec: WAV (PCM)&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio codec is NOT WAV (PCM) (Found: {audio_codec})&amp;quot;)&lt;br /&gt;
        &lt;br /&gt;
        # Check sample rate&lt;br /&gt;
        sample_rate = audio_stream.get('sample_rate')&lt;br /&gt;
        if sample_rate == &amp;quot;48000&amp;quot;:&lt;br /&gt;
            print(&amp;quot;Audio sample rate: 48 kHz&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Audio sample rate is NOT 48 kHz (Found: {sample_rate} Hz)&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: No audio stream found&amp;quot;)&lt;br /&gt;
    &lt;br /&gt;
    # Check synchronization&lt;br /&gt;
    if video_stream and audio_stream:&lt;br /&gt;
        video_start_pts = float(video_stream.get('start_time', 0))&lt;br /&gt;
        audio_start_pts = float(audio_stream.get('start_time', 0))&lt;br /&gt;
        if abs(video_start_pts - audio_start_pts) &amp;lt; 0.01:  # Tolerance for synchronization&lt;br /&gt;
            print(&amp;quot;Video and audio are synchronized.&amp;quot;)&lt;br /&gt;
        else:&lt;br /&gt;
            print(f&amp;quot;ERROR: Video and audio are NOT synchronized. Start difference: {abs(video_start_pts - audio_start_pts):.3f} seconds&amp;quot;)&lt;br /&gt;
    else:&lt;br /&gt;
        print(&amp;quot;ERROR: Could not determine synchronization (missing video or audio streams).&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Example usage&lt;br /&gt;
if __name__ == &amp;quot;__main__&amp;quot;:&lt;br /&gt;
    check_video_file(file_path)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to disconnect audio from video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '48000', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and avoid automatic exposure, white balance, or focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the [https://www.elgato.com/ww/en/p/facecam-mk2 facecam] for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and finalize the video file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|lossless or high-quality codecs&lt;br /&gt;
|-&lt;br /&gt;
!PCM (WAV)&lt;br /&gt;
|uncompressed&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Set your audio for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
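After converting, the result can be sanity-checked with Python's standard-library wave module; a minimal sketch (the file name is a placeholder):&lt;br /&gt;

```python
import wave

def wav_matches_spec(path, rate=48000, channels=2, sampwidth_bytes=2):
    """Return True if a WAV file has the recommended playback format
    (48 kHz, stereo, 16-bit = 2 bytes per sample)."""
    with wave.open(path, "rb") as w:
        return (w.getframerate() == rate
                and w.getnchannels() == channels
                and w.getsampwidth() == sampwidth_bytes)

# Example (assuming output_fixed.wav was produced by the command above):
# print(wav_matches_spec("output_fixed.wav"))
```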
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 Settings to check&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your device → Properties:&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab: set Default Format to 48000 Hz, 16 bit, Studio Quality.&lt;br /&gt;
&lt;br /&gt;
   - Enhancements tab: disable all enhancements.&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab, Exclusive Mode:&lt;br /&gt;
&lt;br /&gt;
       - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
       - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
import sounddevice as sd  # Required for the device query below&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up (no window was opened, so only quit the core)&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
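Putting these options together, a call might be assembled as in this sketch (file names are placeholders; the exact option set should be adapted to your material):&lt;br /&gt;

```python
import subprocess

def build_sync_command(input_file, output_file):
    """Assemble an FFmpeg call that regenerates timestamps and maps the
    first video and audio streams explicitly (file names are placeholders)."""
    return [
        "ffmpeg",
        "-fflags", "+genpts",   # input option: regenerate presentation timestamps
        "-i", input_file,
        "-map", "0:v:0",        # first video stream of input 0
        "-map", "0:a:0",        # first audio stream of input 0
        "-async", "1",          # correct audio drift at the start
        output_file,
    ]

cmd = build_sync_command("input.mp4", "output.mp4")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually run FFmpeg
```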
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mp4&lt;br /&gt;
	-c:v libx264: Encodes video using H.264.&lt;br /&gt;
	-preset slow: Optimizes for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces a constant frame rate.&lt;br /&gt;
	-g 30: Sets the keyframe (GOP) interval to 30 frames.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio as uncompressed 16-bit PCM.&lt;br /&gt;
	-ar 48000: Sets the audio sample rate to 48 kHz.&lt;br /&gt;
	-fflags +genpts: Generates accurate presentation timestamps.&lt;br /&gt;
	-async 1: Synchronizes the audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Additional Tips===&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
&lt;br /&gt;
You can use '''Shotcut''', a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
Another option is '''DaVinci Resolve''', a free, professional-grade program for editing and converting video files, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Camera&amp;diff=6035</id>
		<title>Camera</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Camera&amp;diff=6035"/>
		<updated>2025-04-29T07:48:32Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Webcams */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tsg&lt;br /&gt;
| name           = Cameras&lt;br /&gt;
| image          = Canon-XF405-Side-Front.jpg&lt;br /&gt;
| caption        = Canon XF405 Camcorder&lt;br /&gt;
| downloads      = &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Cameras are video capturing devices that can turn your cat into a TikTok or Instagram sensation. They can also be used to create stimulus material or record participants during your research experiments.&lt;br /&gt;
Different types of camera can be used for different purposes. When in doubt, ask the TSG which camera is most appropriate for your situation. &lt;br /&gt;
&lt;br /&gt;
Aside from technical suitability, it is also very important to consider data privacy and protection when choosing a type of camera for research purposes, especially when recording outside of our lab environment. Always talk to your data officer/steward/person before starting your project. They can advise you on the type of recording media you are allowed to use and/or protocols to follow to ensure sensitive data isn't leaked.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Table of Contents will be generated here --&amp;gt;&lt;br /&gt;
= Camera Types =&lt;br /&gt;
Below is an overview of available cameras within our faculty. Click on the individual pages for more information.&lt;br /&gt;
&lt;br /&gt;
== Ipods ==&lt;br /&gt;
{{see also|Ipod}}&lt;br /&gt;
The TSG recommends the use of iPods for general observational recording of research participants. Most people will know how to operate the camera on their smartphone, so setting up and using the iPod should be familiar and easy. Our iPods are passcode-protected, providing an extra layer of data protection in case the device gets stolen or lost.&lt;br /&gt;
&lt;br /&gt;
'''Pros:''' Ease of use, data protection&lt;br /&gt;
&lt;br /&gt;
'''Cons:''' Limited options, image &amp;amp; audio quality&lt;br /&gt;
&lt;br /&gt;
== Webcams ==&lt;br /&gt;
{{see also|Screen_Recording_with_OBS#Webcam|[https://www.elgato.com/ww/en/p/facecam-mk2 facecam]}} &lt;br /&gt;
Webcams can be used to stream and record video directly to a [[Computers|lab computer]] or laptop. This is helpful in situations where you wish to link/sync video data to other measurements, and/or to process the video data in real-time. &lt;br /&gt;
&lt;br /&gt;
'''Pros:''' Ease of use, real-time monitoring&lt;br /&gt;
&lt;br /&gt;
'''Cons:''' Limited options, image &amp;amp; audio quality&lt;br /&gt;
&lt;br /&gt;
== Camcorders ==&lt;br /&gt;
{{see also|Camcorders}}&lt;br /&gt;
(High-end) camcorders are primarily used for recording stimulus material, or in cases where high image quality is required. Due to their larger sensor size and more advanced control over shutter speed, aperture, etc., camcorders can offer a much higher image quality than other cameras listed on this page, especially in low-light conditions. &lt;br /&gt;
&lt;br /&gt;
'''Pros:''' Image quality, image controls&lt;br /&gt;
&lt;br /&gt;
'''Cons:''' Ease of use&lt;br /&gt;
&lt;br /&gt;
== 360 Degree Cameras ==&lt;br /&gt;
{{see also|Insta360 X3}}&lt;br /&gt;
360 degree cameras are special in that they can record a 360 degree view around them. You can use a 360 degree camera to create unique stimulus material, or in situations where you wish to record multiple subjects at once and placing multiple normal cameras is not an option.&lt;br /&gt;
&lt;br /&gt;
'''Pros:''' Unique 360 degree view&lt;br /&gt;
&lt;br /&gt;
'''Cons:''' Data security, file sizes, special editing software required&lt;br /&gt;
&lt;br /&gt;
== Surveillance Cameras ==&lt;br /&gt;
{{see also|Surveillance Camera}}&lt;br /&gt;
Surveillance cameras are, at least in our case, effectively webcams with a network connection. They are primarily used for monitoring only (not recording).&lt;br /&gt;
&lt;br /&gt;
'''Pros:''' Ease of use, real-time monitoring&lt;br /&gt;
&lt;br /&gt;
'''Cons:''' Limited options, image quality&lt;br /&gt;
&lt;br /&gt;
== Motion Capture Cameras ==&lt;br /&gt;
{{see also|Qualisys|Optotrak}}&lt;br /&gt;
Motion capture cameras are highly specialized cameras used for tracking optical markers. What makes these cameras special is their global shutter (as opposed to a rolling shutter found in most other cameras, which is much cheaper but can create [https://en.wikipedia.org/wiki/Rolling_shutter distortions]), and their sensitivity to the infrared spectrum (wherein motion capture markers are visible).&lt;br /&gt;
While it is possible to use some of our [[Qualisys]] cameras for recording regular video, it is not advised for any purposes other than motion capture. &lt;br /&gt;
&lt;br /&gt;
'''Pros:''' Global shutter, high-speed marker tracking&lt;br /&gt;
&lt;br /&gt;
'''Cons:''' Everything else&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- ==References== (optional)&lt;br /&gt;
&amp;lt;references /&amp;gt; Automatically generates list of references using the &amp;lt;ref&amp;gt;&amp;lt;/ref&amp;gt; tags. --&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Camera&amp;diff=6034</id>
		<title>Camera</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Camera&amp;diff=6034"/>
		<updated>2025-04-29T07:48:14Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Webcams */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tsg&lt;br /&gt;
| name           = Cameras&lt;br /&gt;
| image          = Canon-XF405-Side-Front.jpg&lt;br /&gt;
| caption        = Canon XF405 Camcorder&lt;br /&gt;
| downloads      = &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Cameras are video capturing devices that can turn your cat into a TikTok or Instagram sensation. They can also be used to create stimulus material or record participants during your research experiments.&lt;br /&gt;
Different types of camera can be used for different purposes. When in doubt, ask the TSG which camera is most appropriate for your situation. &lt;br /&gt;
&lt;br /&gt;
Aside from technical suitability, it is also very important to consider data privacy and protection when choosing a type of camera for research purposes, especially when recording outside of our lab environment. Always talk to your data officer/steward/person before starting your project. They can advise you on the type of recording media you are allowed to use and/or protocols to follow to ensure sensitive data isn't leaked.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Table of Contents will be generated here --&amp;gt;&lt;br /&gt;
= Camera Types =&lt;br /&gt;
Below is an overview of available cameras within our faculty. Click on the individual pages for more information.&lt;br /&gt;
&lt;br /&gt;
== iPods ==&lt;br /&gt;
{{see also|Ipod}}&lt;br /&gt;
The TSG recommends the use of iPods for general observational recording of research participants. Most people will know how to operate the camera on their smartphone, so setting up and using the iPod should be familiar and easy. Our iPods are passcode-protected, providing an extra layer of data protection in case the device gets stolen or lost.&lt;br /&gt;
&lt;br /&gt;
'''Pros:''' Ease of use, data protection&lt;br /&gt;
&lt;br /&gt;
'''Cons:''' Limited options, image &amp;amp; audio quality&lt;br /&gt;
&lt;br /&gt;
== Webcams ==&lt;br /&gt;
{{see also|Screen_Recording_with_OBS#Webcam|[https://www.elgato.com/ww/en/p/facecam-mk2 facecam]}} &lt;br /&gt;
Webcams can be used to stream and record video directly to a [[Computers|lab computer]] or laptop. This is helpful in situations where you wish to link/sync video data to other measurements, and/or to process the video data in real-time. &lt;br /&gt;
&lt;br /&gt;
'''Pros:''' Ease of use, real-time monitoring&lt;br /&gt;
&lt;br /&gt;
'''Cons:''' Limited options, image &amp;amp; audio quality&lt;br /&gt;
&lt;br /&gt;
== Camcorders ==&lt;br /&gt;
{{see also|Camcorders}}&lt;br /&gt;
(High-end) camcorders are primarily used for recording stimulus material, or in cases where high image quality is required. Due to their larger sensor size and more advanced control over shutter speed, aperture, etc., camcorders can offer a much higher image quality than other cameras listed on this page, especially in low-light conditions. &lt;br /&gt;
&lt;br /&gt;
'''Pros:''' Image quality, image controls&lt;br /&gt;
&lt;br /&gt;
'''Cons:''' Ease of use&lt;br /&gt;
&lt;br /&gt;
== 360 Degree Cameras ==&lt;br /&gt;
{{see also|Insta360 X3}}&lt;br /&gt;
360-degree cameras are special in that they can record a full view in all directions around them. You can use a 360-degree camera to create unique stimulus material, or in situations where you wish to record multiple subjects at once and placing multiple normal cameras is not an option.&lt;br /&gt;
&lt;br /&gt;
'''Pros:''' Unique 360 degree view&lt;br /&gt;
&lt;br /&gt;
'''Cons:''' Data security, file sizes, special editing software required&lt;br /&gt;
&lt;br /&gt;
== Surveillance Cameras ==&lt;br /&gt;
{{see also|Surveillance Camera}}&lt;br /&gt;
Surveillance cameras are, at least in our case, effectively webcams with a network connection. They are used for real-time monitoring only, not for recording.&lt;br /&gt;
&lt;br /&gt;
'''Pros:''' Ease of use, real-time monitoring&lt;br /&gt;
&lt;br /&gt;
'''Cons:''' Limited options, image quality&lt;br /&gt;
&lt;br /&gt;
== Motion Capture Cameras ==&lt;br /&gt;
{{see also|Qualisys|Optotrak}}&lt;br /&gt;
Motion capture cameras are highly specialized cameras used for tracking optical markers. What makes these cameras special is their global shutter (as opposed to a rolling shutter found in most other cameras, which is much cheaper but can create [https://en.wikipedia.org/wiki/Rolling_shutter distortions]), and their sensitivity to the infrared spectrum (wherein motion capture markers are visible).&lt;br /&gt;
While it is possible to use some of our [[Qualisys]] cameras for recording regular video, it is not advised for any purposes other than motion capture. &lt;br /&gt;
&lt;br /&gt;
'''Pros:''' Global shutter, high-speed marker tracking&lt;br /&gt;
&lt;br /&gt;
'''Cons:''' Everything else&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==References== &amp;lt;!-- Optional --&amp;gt;&lt;br /&gt;
&amp;lt;references /&amp;gt; &amp;lt;!-- Automatically generates list of references using the &amp;lt;ref&amp;gt;&amp;lt;/ref&amp;gt; tags. --&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Camera&amp;diff=6033</id>
		<title>Camera</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Camera&amp;diff=6033"/>
		<updated>2025-04-29T07:47:40Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Webcams */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tsg&lt;br /&gt;
| name           = Cameras&lt;br /&gt;
| image          = Canon-XF405-Side-Front.jpg&lt;br /&gt;
| caption        = Canon XF405 Camcorder&lt;br /&gt;
| downloads      = &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Cameras are video-capturing devices that can turn your cat into a TikTok or Instagram sensation. They can also be used to create stimulus material or record participants during your research experiments.&lt;br /&gt;
Different types of cameras suit different purposes. When in doubt, ask the TSG which camera is most appropriate for your situation. &lt;br /&gt;
&lt;br /&gt;
Aside from technical suitability, it is also very important to consider data privacy and protection when choosing a type of camera for research purposes, especially when recording outside of our lab environment. Always talk to your data officer/steward/person before starting your project. They can advise you on the type of recording media you are allowed to use and/or protocols to follow to ensure sensitive data isn't leaked.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Table of Contents will be generated here --&amp;gt;&lt;br /&gt;
= Camera Types =&lt;br /&gt;
Below is an overview of available cameras within our faculty. Click on the individual pages for more information.&lt;br /&gt;
&lt;br /&gt;
== iPods ==&lt;br /&gt;
{{see also|Ipod}}&lt;br /&gt;
The TSG recommends the use of iPods for general observational recording of research participants. Most people will know how to operate the camera on their smartphone, so setting up and using the iPod should be familiar and easy. Our iPods are passcode-protected, providing an extra layer of data protection in case the device gets stolen or lost.&lt;br /&gt;
&lt;br /&gt;
'''Pros:''' Ease of use, data protection&lt;br /&gt;
&lt;br /&gt;
'''Cons:''' Limited options, image &amp;amp; audio quality&lt;br /&gt;
&lt;br /&gt;
== Webcams ==&lt;br /&gt;
{{see also|Screen_Recording_with_OBS#Webcam}} [https://www.elgato.com/ww/en/p/facecam-mk2 facecam]&lt;br /&gt;
Webcams can be used to stream and record video directly to a [[Computers|lab computer]] or laptop. This is helpful in situations where you wish to link/sync video data to other measurements, and/or to process the video data in real-time. &lt;br /&gt;
&lt;br /&gt;
'''Pros:''' Ease of use, real-time monitoring&lt;br /&gt;
&lt;br /&gt;
'''Cons:''' Limited options, image &amp;amp; audio quality&lt;br /&gt;
&lt;br /&gt;
== Camcorders ==&lt;br /&gt;
{{see also|Camcorders}}&lt;br /&gt;
(High-end) camcorders are primarily used for recording stimulus material, or in cases where high image quality is required. Due to their larger sensor size and more advanced control over shutter speed, aperture, etc., camcorders can offer a much higher image quality than other cameras listed on this page, especially in low-light conditions. &lt;br /&gt;
&lt;br /&gt;
'''Pros:''' Image quality, image controls&lt;br /&gt;
&lt;br /&gt;
'''Cons:''' Ease of use&lt;br /&gt;
&lt;br /&gt;
== 360 Degree Cameras ==&lt;br /&gt;
{{see also|Insta360 X3}}&lt;br /&gt;
360-degree cameras are special in that they can record a full view in all directions around them. You can use a 360-degree camera to create unique stimulus material, or in situations where you wish to record multiple subjects at once and placing multiple normal cameras is not an option.&lt;br /&gt;
&lt;br /&gt;
'''Pros:''' Unique 360 degree view&lt;br /&gt;
&lt;br /&gt;
'''Cons:''' Data security, file sizes, special editing software required&lt;br /&gt;
&lt;br /&gt;
== Surveillance Cameras ==&lt;br /&gt;
{{see also|Surveillance Camera}}&lt;br /&gt;
Surveillance cameras are, at least in our case, effectively webcams with a network connection. They are used for real-time monitoring only, not for recording.&lt;br /&gt;
&lt;br /&gt;
'''Pros:''' Ease of use, real-time monitoring&lt;br /&gt;
&lt;br /&gt;
'''Cons:''' Limited options, image quality&lt;br /&gt;
&lt;br /&gt;
== Motion Capture Cameras ==&lt;br /&gt;
{{see also|Qualisys|Optotrak}}&lt;br /&gt;
Motion capture cameras are highly specialized cameras used for tracking optical markers. What makes these cameras special is their global shutter (as opposed to a rolling shutter found in most other cameras, which is much cheaper but can create [https://en.wikipedia.org/wiki/Rolling_shutter distortions]), and their sensitivity to the infrared spectrum (wherein motion capture markers are visible).&lt;br /&gt;
While it is possible to use some of our [[Qualisys]] cameras for recording regular video, it is not advised for any purposes other than motion capture. &lt;br /&gt;
&lt;br /&gt;
'''Pros:''' Global shutter, high-speed marker tracking&lt;br /&gt;
&lt;br /&gt;
'''Cons:''' Everything else&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==References== &amp;lt;!-- Optional --&amp;gt;&lt;br /&gt;
&amp;lt;references /&amp;gt; &amp;lt;!-- Automatically generates list of references using the &amp;lt;ref&amp;gt;&amp;lt;/ref&amp;gt; tags. --&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6032</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6032"/>
		<updated>2025-04-29T07:42:09Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Video encoding */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
Note that the lab computer displays are typically set to 1920×1080 at 120 Hz, which we have found sufficient for most applications; higher settings are possible. Later on this page we explain how to prepare (encode) audio and video. We will start with playing video, both with and without audio. &lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart= False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if keys:  # any returned key is 'q' because of the keyList filter&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with its audio track played separately:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to split the audio track from a video file:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '48000', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the [https://www.elgato.com/ww/en/p/facecam-mk2 facecam] for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
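The settings in the table above can be applied to an existing clip with ffmpeg. The following is a minimal sketch, assuming ffmpeg is available on your PATH; the file names are placeholders for illustration only:

```python
import subprocess

def build_encode_cmd(input_file, output_file, fps=60, bitrate="15M"):
    """Build an ffmpeg command matching the recommended settings:
    H.264 (libx264), constant 60 fps, Full HD, ~10-20 Mbps."""
    return [
        "ffmpeg", "-i", input_file,
        "-c:v", "libx264",          # H.264 codec
        "-vf", "scale=1920:1080",   # Full HD resolution
        "-r", str(fps),             # target frame rate
        "-vsync", "cfr",            # enforce a constant frame rate
        "-b:v", bitrate,            # video bitrate
        output_file,
    ]

cmd = build_encode_cmd("stimulus_raw.mp4", "stimulus_fullhd.mp4")
# subprocess.run(cmd)  # uncomment to run; requires ffmpeg on your PATH
```

Match the frame rate and resolution to your experiment's display settings rather than blindly re-encoding, as every re-encode loses some quality.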
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for timing-critical experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height, fps=freq)&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and finalize the video file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|lossless or high-quality codecs&lt;br /&gt;
|-&lt;br /&gt;
!PCM (WAV)&lt;br /&gt;
|uncompressed&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
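You can generate a test file that matches these settings (uncompressed PCM, 48 kHz, 16-bit) with Python's built-in wave module and verify the parameters afterwards. A small sketch; the tone and file name are illustrative:

```python
import math
import struct
import wave

def write_test_tone(path, freq=440.0, secs=0.5, rate=48000):
    """Write a stereo 16-bit PCM WAV at 48 kHz, matching the recommended settings."""
    n_frames = int(secs * rate)
    with wave.open(path, "wb") as wf:
        wf.setnchannels(2)      # stereo
        wf.setsampwidth(2)      # 16-bit samples
        wf.setframerate(rate)   # 48 kHz sample rate
        frames = bytearray()
        for i in range(n_frames):
            sample = int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / rate))
            frames += struct.pack("<hh", sample, sample)  # same tone on both channels
        wf.writeframes(bytes(frames))

write_test_tone("tone_48k.wav")
with wave.open("tone_48k.wav", "rb") as wf:
    print(wf.getframerate(), wf.getsampwidth() * 8, wf.getnchannels())  # prints: 48000 16 2
```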
&lt;br /&gt;
Set your audio for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
The following Windows 10 settings should be checked:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your device → Properties → Advanced tab:&lt;br /&gt;
&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit, Studio Quality.&lt;br /&gt;
&lt;br /&gt;
   - In the Enhancements tab, disable all enhancements.&lt;br /&gt;
&lt;br /&gt;
   - In the Advanced tab, under Exclusive Mode:&lt;br /&gt;
&lt;br /&gt;
   - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
   - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
wave_file.stop()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
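To check whether the stream timestamps are actually consistent, you can inspect each stream's start time and duration with ffprobe. A sketch, assuming ffprobe (bundled with ffmpeg) is installed; the file name is hypothetical:

```python
import json
import subprocess

def probe_cmd(path):
    """Build an ffprobe command reporting each stream's type, start time, and duration."""
    return [
        "ffprobe", "-v", "error",
        "-show_entries", "stream=codec_type,start_time,duration",
        "-of", "json", path,
    ]

def stream_timestamps(path):
    """Run ffprobe and return (codec_type, start_time, duration) per stream."""
    out = subprocess.run(probe_cmd(path), capture_output=True, text=True).stdout
    return [(s.get("codec_type"), s.get("start_time"), s.get("duration"))
            for s in json.loads(out)["streams"]]

# stream_timestamps("clip.mp4")  # audio and video start times should match closely
```

If the audio and video start times differ by more than a frame or two, re-mux with the synchronization options above.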
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mov&lt;br /&gt;
	-c:v libx264: Encodes video using H.264.&lt;br /&gt;
	-preset slow: Optimizes for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces a constant frame rate.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio as uncompressed 16-bit PCM (not supported in the .mp4 container, hence the .mov output).&lt;br /&gt;
	-ar 48000: Sets the audio sample rate to 48 kHz.&lt;br /&gt;
	-fflags +genpts: Generates accurate presentation timestamps.&lt;br /&gt;
	-async 1: Synchronizes the audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Additional Tips===&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
&lt;br /&gt;
For simple edits, you can use '''Shotcut''', a free, open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
Another option is '''DaVinci Resolve''', a free, professional-grade program for editing and converting video files, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6031</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6031"/>
		<updated>2025-04-29T07:39:37Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Python */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
Note that the lab computer displays are typically set to 1920×1080 at 120 Hz, which we have found sufficient for most applications; higher settings are possible. Later on this page we explain how to prepare (encode) audio and video. We will start with playing video, both with and without audio. &lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart= False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if keys:  # any returned key is 'q' because of the keyList filter&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with its audio track played separately:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to split the audio track out of a video file with FFmpeg:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Strip the audio stream (-an), keeping only the video&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
# Strip the video stream (-vn) and save the audio as 16-bit PCM at 44.1 kHz&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '44100', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
* You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|Enforce a constant frame rate; avoid variable frame rate (VFR) recordings&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
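The recommended frame rate interacts with the display refresh rate: smooth playback requires each video frame to span a whole number of display refreshes. A minimal sketch (the helper names are illustrative, not part of any library) to check this for your own settings:&lt;br /&gt;

```python
def refreshes_per_frame(display_hz: float, video_fps: float) -> float:
    """Number of display refreshes spanned by one video frame."""
    return display_hz / video_fps

def is_timing_safe(display_hz: float, video_fps: float, tol: float = 1e-6) -> bool:
    """True if each video frame covers a whole number of refreshes.

    A non-integer ratio forces some frames to be displayed longer
    than others, which is visible as judder.
    """
    ratio = refreshes_per_frame(display_hz, video_fps)
    return abs(ratio - round(ratio)) < tol

print(is_timing_safe(120, 60))  # True: each frame spans exactly 2 refreshes
print(is_timing_safe(120, 25))  # False: 4.8 refreshes per frame
```

For example, 60 fps content on a 120 Hz lab display is safe (2 refreshes per frame), whereas 25 fps content is not (4.8 refreshes per frame).&lt;br /&gt;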
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and finalize the video file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|uncompressed PCM (WAV) or another lossless codec&lt;br /&gt;
|-&lt;br /&gt;
!Bit depth&lt;br /&gt;
|16-bit signed integer (s16)&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Set your audio for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
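To confirm that a converted file really ended up in this format, you can inspect it with Python's standard wave module. The sketch below writes a short silent WAV (the file name is illustrative) in the recommended format and reads its parameters back:&lt;br /&gt;

```python
import wave

FILE = 'format_check.wav'   # illustrative file name

# Write 0.1 s of silence in the recommended format:
# 48 kHz, stereo, 16-bit PCM.
with wave.open(FILE, 'wb') as w:
    w.setnchannels(2)        # stereo
    w.setsampwidth(2)        # 16-bit = 2 bytes per sample
    w.setframerate(48000)    # 48 kHz
    w.writeframes(b'\x00' * 2 * 2 * 4800)  # channels * bytes * frames

# Read the parameters back to confirm the format.
with wave.open(FILE, 'rb') as w:
    print(w.getnchannels(), w.getsampwidth() * 8, w.getframerate())
    # → 2 16 48000
```

Run the same read-back check on your own stimulus files to catch a wrong sample rate before it causes drift in the experiment.&lt;br /&gt;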
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your playback device → Properties:&lt;br /&gt;
&lt;br /&gt;
   Advanced tab:&lt;br /&gt;
&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit (Studio Quality).&lt;br /&gt;
&lt;br /&gt;
   - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
   - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&lt;br /&gt;
   Enhancements tab:&lt;br /&gt;
&lt;br /&gt;
   - Disable all enhancements.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd  # needed for the OS-level device query below&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
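These options can be combined into a single resynchronization pass, called via subprocess like the examples above. This is a sketch, not a verified lab command: the file names are illustrative, and because -async has to adjust the audio, the audio stream is re-encoded (AAC here) while the video is stream-copied:&lt;br /&gt;

```python
import subprocess

# Illustrative file names; adapt to your own material.
cmd = [
    'ffmpeg',
    '-fflags', '+genpts',   # regenerate presentation timestamps (input option)
    '-i', 'input.mp4',
    '-map', '0:v:0',        # explicitly select the first video stream
    '-map', '0:a:0',        # explicitly select the first audio stream
    '-async', '1',          # stretch/squeeze audio to correct drift
    '-c:v', 'copy',         # leave the video stream untouched
    '-c:a', 'aac',          # -async requires the audio to be re-encoded
    'resynced.mp4',
]
print(' '.join(cmd))
# subprocess.run(cmd)  # uncomment to actually run FFmpeg
```

Note that -fflags +genpts is placed before -i, since it applies to how the input is read.&lt;br /&gt;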
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mp4&lt;br /&gt;
	-c:v libx264: Encode video using H.264.&lt;br /&gt;
	-preset slow: Optimize for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces constant frame rate.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio in uncompressed WAV format.&lt;br /&gt;
	-ar 48000: Sets audio sample rate to 48.0 kHz.&lt;br /&gt;
	-fflags +genpts: Ensures accurate timestamps.&lt;br /&gt;
	-async 1: Synchronizes audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Additional tips===&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
&lt;br /&gt;
You can use '''Shotcut''', a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
Another option is '''DaVinci Resolve''', a free, professional-grade program for editing and converting video files, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6030</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6030"/>
		<updated>2025-04-28T14:57:50Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Video playback */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
Note that the Lab Computer displays are typically set to 1920×1080 at 120 Hz, which is sufficient for most applications; higher settings are possible if needed. The sections below first cover playing video, both with and without audio, and then explain how to encode audio and video yourself.&lt;br /&gt;
&lt;br /&gt;
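Smooth playback requires each video frame to span a whole number of display refreshes. A minimal sketch (the helper names are illustrative, not part of PsychoPy) to check this for your display and video settings:&lt;br /&gt;

```python
def refreshes_per_frame(display_hz: float, video_fps: float) -> float:
    """Number of display refreshes spanned by one video frame."""
    return display_hz / video_fps

def is_timing_safe(display_hz: float, video_fps: float, tol: float = 1e-6) -> bool:
    """True if each video frame covers a whole number of refreshes.

    A non-integer ratio forces some frames to be displayed longer
    than others, which is visible as judder.
    """
    ratio = refreshes_per_frame(display_hz, video_fps)
    return abs(ratio - round(ratio)) < tol

print(is_timing_safe(120, 60))  # True: each frame spans exactly 2 refreshes
print(is_timing_safe(120, 25))  # False: 4.8 refreshes per frame
```

On the 120 Hz lab displays, 60 fps and 30 fps material is timing-safe, while 25 fps material is not.&lt;br /&gt;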
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart= False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a muted video with a separately loaded audio track:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)  # empirically chosen offset between audio and video start&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to split the audio track out of a video file with FFmpeg:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Strip the audio stream (-an), keeping only the video&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
# Strip the video stream (-vn) and save the audio as 16-bit PCM at 44.1 kHz&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '44100', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
* You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|Enforce a constant frame rate; avoid variable frame rate (VFR) recordings&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and finalize the video file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|uncompressed PCM (WAV) or another lossless codec&lt;br /&gt;
|-&lt;br /&gt;
!Bit depth&lt;br /&gt;
|16-bit signed integer (s16)&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Set your audio for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
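To confirm that a converted file really ended up in this format, you can inspect it with Python's standard wave module. The sketch below writes a short silent WAV (the file name is illustrative) in the recommended format and reads its parameters back:&lt;br /&gt;

```python
import wave

FILE = 'format_check.wav'   # illustrative file name

# Write 0.1 s of silence in the recommended format:
# 48 kHz, stereo, 16-bit PCM.
with wave.open(FILE, 'wb') as w:
    w.setnchannels(2)        # stereo
    w.setsampwidth(2)        # 16-bit = 2 bytes per sample
    w.setframerate(48000)    # 48 kHz
    w.writeframes(b'\x00' * 2 * 2 * 4800)  # channels * bytes * frames

# Read the parameters back to confirm the format.
with wave.open(FILE, 'rb') as w:
    print(w.getnchannels(), w.getsampwidth() * 8, w.getframerate())
    # → 2 16 48000
```

Run the same read-back check on your own stimulus files to catch a wrong sample rate before it causes drift in the experiment.&lt;br /&gt;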
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your playback device → Properties:&lt;br /&gt;
&lt;br /&gt;
   Advanced tab:&lt;br /&gt;
&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit (Studio Quality).&lt;br /&gt;
&lt;br /&gt;
   - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
   - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&lt;br /&gt;
   Enhancements tab:&lt;br /&gt;
&lt;br /&gt;
   - Disable all enhancements.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd  # needed for the OS-level device query below&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
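These options can be combined into a single resynchronization pass, called via subprocess like the examples above. This is a sketch, not a verified lab command: the file names are illustrative, and because -async has to adjust the audio, the audio stream is re-encoded (AAC here) while the video is stream-copied:&lt;br /&gt;

```python
import subprocess

# Illustrative file names; adapt to your own material.
cmd = [
    'ffmpeg',
    '-fflags', '+genpts',   # regenerate presentation timestamps (input option)
    '-i', 'input.mp4',
    '-map', '0:v:0',        # explicitly select the first video stream
    '-map', '0:a:0',        # explicitly select the first audio stream
    '-async', '1',          # stretch/squeeze audio to correct drift
    '-c:v', 'copy',         # leave the video stream untouched
    '-c:a', 'aac',          # -async requires the audio to be re-encoded
    'resynced.mp4',
]
print(' '.join(cmd))
# subprocess.run(cmd)  # uncomment to actually run FFmpeg
```

Note that -fflags +genpts is placed before -i, since it applies to how the input is read.&lt;br /&gt;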
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mp4&lt;br /&gt;
	-c:v libx264: Encode video using H.264.&lt;br /&gt;
	-preset slow: Optimize for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces constant frame rate.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio in uncompressed WAV format.&lt;br /&gt;
	-ar 48000: Sets audio sample rate to 48.0 kHz.&lt;br /&gt;
	-fflags +genpts: Ensures accurate timestamps.&lt;br /&gt;
	-async 1: Synchronizes audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Additional tips===&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
&lt;br /&gt;
You can use '''Shotcut''', a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
Alternatively, '''DaVinci Resolve''' is a free, professional-grade program for editing and converting video files, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6029</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6029"/>
		<updated>2025-04-28T14:54:31Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we recommend always consulting a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
The Lab Computer displays are typically set to 1920×1080 at 120 Hz. We have found this sufficient for most applications; higher settings are possible.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart= False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with its audio track disabled, playing the audio separately instead:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to split a video file into separate video-only and audio-only files:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '44100', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
- Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
- Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
- Stabilize the camera and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
- Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
&lt;br /&gt;
You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
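To verify that a recorded file actually matches these settings, you can inspect it with ffprobe (shipped with FFmpeg). The checker below is a sketch: it operates on one parsed ffprobe stream dictionary, and the file name in the usage comment is a placeholder:&lt;br /&gt;

```python
import json
import subprocess

def check_video_stream(stream, width=1920, height=1080, fps=60.0):
    """Return a list of warnings for properties that deviate from the
    recommended settings. `stream` is one ffprobe stream dictionary."""
    warnings = []
    if stream.get("codec_name") != "h264":
        warnings.append(f"codec is {stream.get('codec_name')}, expected h264")
    if (stream.get("width"), stream.get("height")) != (width, height):
        warnings.append(f"resolution is {stream.get('width')}x{stream.get('height')}")
    num, _, den = stream.get("avg_frame_rate", "0/1").partition("/")
    if den and int(den) and abs(int(num) / int(den) - fps) > 0.5:
        warnings.append(f"frame rate is {stream['avg_frame_rate']}, expected {fps}")
    return warnings

# Usage with a real file (requires ffprobe on the PATH):
# out = subprocess.run(["ffprobe", "-v", "quiet", "-print_format", "json",
#                       "-show_streams", "-select_streams", "v:0", "input.mp4"],
#                      capture_output=True, text=True, check=True)
# print(check_video_stream(json.loads(out.stdout)["streams"][0]))
```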
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height, fps=freq)&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and finalize the output file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|lossless or high-quality codecs&lt;br /&gt;
|-&lt;br /&gt;
!PCM (WAV)&lt;br /&gt;
|uncompressed&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Prepare your audio file for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
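Before converting, you can check whether a .wav file already matches the recommended format using only the Python standard library (a sketch; the file name in the usage comment is a placeholder):&lt;br /&gt;

```python
import wave

def check_wav(path, rate=48000, channels=2, sample_width=2):
    """Return a list of deviations from the recommended
    48 kHz / 16-bit / stereo format; [] means no conversion needed."""
    with wave.open(path, "rb") as w:
        problems = []
        if w.getframerate() != rate:
            problems.append(f"sample rate {w.getframerate()} Hz, expected {rate}")
        if w.getnchannels() != channels:
            problems.append(f"{w.getnchannels()} channel(s), expected {channels}")
        if w.getsampwidth() != sample_width:
            problems.append(f"{8 * w.getsampwidth()}-bit, expected {8 * sample_width}-bit")
        return problems

# Example (hypothetical file name):
# print(check_wav("tick_rhythm_30min.wav"))
```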
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your device → Properties:&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab: set Default Format to 48000 Hz, 16 bit (Studio Quality).&lt;br /&gt;
&lt;br /&gt;
   - Enhancements tab: disable all enhancements.&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab, Exclusive Mode:&lt;br /&gt;
&lt;br /&gt;
       - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
       - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mp4&lt;br /&gt;
	-c:v libx264: Encodes video using H.264.&lt;br /&gt;
	-preset slow: Optimizes for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces a constant frame rate.&lt;br /&gt;
	-g 30: Inserts a keyframe every 30 frames.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio as uncompressed 16-bit PCM.&lt;br /&gt;
	-ar 48000: Sets the audio sample rate to 48 kHz.&lt;br /&gt;
	-fflags +genpts: Ensures accurate timestamps.&lt;br /&gt;
	-async 1: Synchronizes audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Additional Tips===&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
&lt;br /&gt;
You can use '''Shotcut''', a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
Alternatively, '''DaVinci Resolve''' is a free, professional-grade program for editing and converting video files, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6028</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6028"/>
		<updated>2025-04-28T14:48:41Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Editing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we recommend always consulting a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
The Lab Computer displays are typically set to 1920×1080 at 120 Hz. We have found this sufficient for most applications; higher settings are possible.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart= False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with its audio track disabled, playing the audio separately instead:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to split a video file into separate video-only and audio-only files:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '44100', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
- Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
- Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
- Stabilize the camera and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
- Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
&lt;br /&gt;
You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height, fps=freq)&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and finalize the output file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|lossless or high-quality codecs&lt;br /&gt;
|-&lt;br /&gt;
!PCM (WAV)&lt;br /&gt;
|uncompressed&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Prepare your audio file for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your device → Properties:&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab: set Default Format to 48000 Hz, 16 bit (Studio Quality).&lt;br /&gt;
&lt;br /&gt;
   - Enhancements tab: disable all enhancements.&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab, Exclusive Mode:&lt;br /&gt;
&lt;br /&gt;
       - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
       - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
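When calling FFmpeg from a script, these options can be assembled into an argument list; a minimal sketch (file names are placeholders, and actually running the command assumes ffmpeg is on your PATH):&lt;br /&gt;

```python
import subprocess  # used only when you actually run the command

def build_sync_command(input_file, output_file):
    """Assemble an FFmpeg argument list that regenerates timestamps,
    corrects audio/video drift, and maps the streams explicitly."""
    return [
        'ffmpeg', '-i', input_file,
        '-fflags', '+genpts',  # regenerate presentation timestamps (PTS)
        '-async', '1',         # resynchronize audio when it drifts
        '-map', '0:v:0',       # first video stream of the first input
        '-map', '0:a:0',       # first audio stream of the first input
        output_file,
    ]

cmd = build_sync_command('input.mp4', 'output_synced.mp4')
print(' '.join(cmd))
# To execute: subprocess.run(cmd, check=True)
```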
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mov&lt;br /&gt;
	-c:v libx264: Encodes video using H.264.&lt;br /&gt;
	-preset slow: Optimizes for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces a constant frame rate.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio as uncompressed 16-bit PCM (use a container such as MOV or MKV, since MP4 has limited PCM support).&lt;br /&gt;
	-ar 48000: Sets the audio sample rate to 48 kHz.&lt;br /&gt;
	-fflags +genpts: Ensures accurate timestamps.&lt;br /&gt;
	-async 1: Synchronizes the audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Additional Tips===&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
&lt;br /&gt;
We recommend '''DaVinci Resolve''' for editing and converting video files. DaVinci Resolve is a free, professional-grade editing program, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can use '''Shotcut''', a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
==Playback==&lt;br /&gt;
&lt;br /&gt;
=== PsychoPy ===&lt;br /&gt;
Example demonstrating how to play a video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import time&lt;br /&gt;
import keyboard&lt;br /&gt;
from psychopy import visual &lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
## Setup Section&lt;br /&gt;
win = visual.Window([720,720], fullscr=False, monitor=&amp;quot;testMonitor&amp;quot;, units='cm')&lt;br /&gt;
&lt;br /&gt;
# append this stimulus to the list of prepared stimuli&lt;br /&gt;
vlc_movies = []&lt;br /&gt;
my_movies = ['YourMovie.mp4']  # paths to your movies, relative to this directory&lt;br /&gt;
&lt;br /&gt;
for movie in my_movies:&lt;br /&gt;
    mov = visual.VlcMovieStim(win, movie,&lt;br /&gt;
    size=600,  # set as `None` to use the native video size&lt;br /&gt;
    pos=[0, 0],  # pos specifies the /center/ of the movie stim location&lt;br /&gt;
    flipVert=False,  # flip the video picture vertically&lt;br /&gt;
    flipHoriz=False,  # flip the video picture horizontally&lt;br /&gt;
    loop=False,  # set True to replay the video when it reaches the end&lt;br /&gt;
    autoStart=True)  # start the video automatically when first drawn&lt;br /&gt;
    vlc_movies.append(mov)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;playing video....&amp;quot;)&lt;br /&gt;
while not(keyboard.is_pressed('q')) and vlc_movies[0].status != visual.FINISHED:&lt;br /&gt;
    vlc_movies[0].draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
    buffer_in = vlc_movies[0].frameIndex&lt;br /&gt;
    print(vlc_movies[0].status)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Stop&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
## Closing Section&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6027</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6027"/>
		<updated>2025-04-28T14:48:10Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* FFmpeg */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
The Lab Computer displays are typically set to 1920×1080 at 120 Hz, which we have found sufficient for most applications; higher settings are possible if needed.&lt;br /&gt;
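When choosing a video frame rate for a given display, it helps to check how many display refreshes each video frame will occupy; a quick sketch of the arithmetic:&lt;br /&gt;

```python
def refreshes_per_frame(display_hz, video_fps):
    """Number of display refreshes each video frame occupies.
    An integer result means frames map cleanly onto refreshes;
    a fractional result means uneven frame durations (possible judder)."""
    return display_hz / video_fps

for fps in (25, 30, 60):
    r = refreshes_per_frame(120, fps)
    note = 'clean' if r.is_integer() else 'uneven, possible judder'
    print(f"{fps} fps on a 120 Hz display: {r:g} refreshes/frame ({note})")
```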
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart= False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with the audio track detached and played separately:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to disconnect audio from video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '44100', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
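As a rough planning aid, the recommended bitrate translates directly into storage requirements (a sketch that ignores audio and container overhead):&lt;br /&gt;

```python
def estimated_size_mb(bitrate_mbps, duration_s):
    """Approximate file size in megabytes for a given video bitrate,
    ignoring audio and container overhead."""
    return bitrate_mbps * duration_s / 8  # Mbit/s * s = Mbit; / 8 = MB

# A 10-minute Full HD recording at the recommended 10-20 Mbps:
for mbps in (10, 20):
    print(f"{mbps} Mbps for 600 s: ~{estimated_size_mb(mbps, 600):.0f} MB")
```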
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for timing-critical experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|PCM (WAV), uncompressed, or another lossless/high-quality codec&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Convert your audio to these settings for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
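After converting, you can verify that a file actually has these parameters with Python's built-in wave module (a minimal sketch; the filename in the example is a placeholder):&lt;br /&gt;

```python
import wave

def check_wav(path, want_rate=48000, want_channels=2, want_sampwidth=2):
    """Return True if the WAV file matches the recommended settings
    (48 kHz, stereo, 16-bit; sampwidth is given in bytes)."""
    with wave.open(path, 'rb') as w:
        rate = w.getframerate()
        channels = w.getnchannels()
        sampwidth = w.getsampwidth()
    ok = rate == want_rate and channels == want_channels and sampwidth == want_sampwidth
    print(f"{path}: {rate} Hz, {channels} ch, {sampwidth * 8}-bit -> {'OK' if ok else 'NOT OK'}")
    return ok

# Example (placeholder filename): check_wav('output_fixed.wav')
```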
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your device → Properties:&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab → Set Default Format to 48000 Hz, 16 bit, Studio Quality.&lt;br /&gt;
&lt;br /&gt;
   - Enhancements tab → Disable all enhancements.&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab → Exclusive Mode:&lt;br /&gt;
&lt;br /&gt;
      - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
      - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
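When calling FFmpeg from a script, these options can be assembled into an argument list; a minimal sketch (file names are placeholders, and actually running the command assumes ffmpeg is on your PATH):&lt;br /&gt;

```python
import subprocess  # used only when you actually run the command

def build_sync_command(input_file, output_file):
    """Assemble an FFmpeg argument list that regenerates timestamps,
    corrects audio/video drift, and maps the streams explicitly."""
    return [
        'ffmpeg', '-i', input_file,
        '-fflags', '+genpts',  # regenerate presentation timestamps (PTS)
        '-async', '1',         # resynchronize audio when it drifts
        '-map', '0:v:0',       # first video stream of the first input
        '-map', '0:a:0',       # first audio stream of the first input
        output_file,
    ]

cmd = build_sync_command('input.mp4', 'output_synced.mp4')
print(' '.join(cmd))
# To execute: subprocess.run(cmd, check=True)
```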
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mov&lt;br /&gt;
	-c:v libx264: Encodes video using H.264.&lt;br /&gt;
	-preset slow: Optimizes for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces a constant frame rate.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio as uncompressed 16-bit PCM (use a container such as MOV or MKV, since MP4 has limited PCM support).&lt;br /&gt;
	-ar 48000: Sets the audio sample rate to 48 kHz.&lt;br /&gt;
	-fflags +genpts: Ensures accurate timestamps.&lt;br /&gt;
	-async 1: Synchronizes the audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Additional Tips===&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
We recommend using DaVinci Resolve for editing and converting video files. DaVinci Resolve is a free, professional-grade editing program, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Playback==&lt;br /&gt;
&lt;br /&gt;
=== PsychoPy ===&lt;br /&gt;
Example demonstrating how to play a video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import time&lt;br /&gt;
import keyboard&lt;br /&gt;
from psychopy import visual &lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
## Setup Section&lt;br /&gt;
win = visual.Window([720,720], fullscr=False, monitor=&amp;quot;testMonitor&amp;quot;, units='cm')&lt;br /&gt;
&lt;br /&gt;
# append this stimulus to the list of prepared stimuli&lt;br /&gt;
vlc_movies = []&lt;br /&gt;
my_movies = ['YourMovie.mp4']  # paths to your movies, relative to this directory&lt;br /&gt;
&lt;br /&gt;
for movie in my_movies:&lt;br /&gt;
    mov = visual.VlcMovieStim(win, movie,&lt;br /&gt;
    size=600,  # set as `None` to use the native video size&lt;br /&gt;
    pos=[0, 0],  # pos specifies the /center/ of the movie stim location&lt;br /&gt;
    flipVert=False,  # flip the video picture vertically&lt;br /&gt;
    flipHoriz=False,  # flip the video picture horizontally&lt;br /&gt;
    loop=False,  # set True to replay the video when it reaches the end&lt;br /&gt;
    autoStart=True)  # start the video automatically when first drawn&lt;br /&gt;
    vlc_movies.append(mov)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;playing video....&amp;quot;)&lt;br /&gt;
while not(keyboard.is_pressed('q')) and vlc_movies[0].status != visual.FINISHED:&lt;br /&gt;
    vlc_movies[0].draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
    buffer_in = vlc_movies[0].frameIndex&lt;br /&gt;
    print(vlc_movies[0].status)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Stop&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
## Closing Section&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6026</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6026"/>
		<updated>2025-04-28T14:47:29Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Video playback */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
The Lab Computer displays are typically set to 1920×1080 at 120 Hz, which we have found sufficient for most applications; higher settings are possible if needed.&lt;br /&gt;
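When choosing a video frame rate for a given display, it helps to check how many display refreshes each video frame will occupy; a quick sketch of the arithmetic:&lt;br /&gt;

```python
def refreshes_per_frame(display_hz, video_fps):
    """Number of display refreshes each video frame occupies.
    An integer result means frames map cleanly onto refreshes;
    a fractional result means uneven frame durations (possible judder)."""
    return display_hz / video_fps

for fps in (25, 30, 60):
    r = refreshes_per_frame(120, fps)
    note = 'clean' if r.is_integer() else 'uneven, possible judder'
    print(f"{fps} fps on a 120 Hz display: {r:g} refreshes/frame ({note})")
```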
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart= False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with the audio track detached and played separately:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to disconnect audio from video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '44100', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
- Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
- Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
- Stabilize the camera and avoid automatic exposure, white balance, or focus during recording to prevent inconsistencies.&lt;br /&gt;
- Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
&lt;br /&gt;
You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
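To check whether an existing clip already matches these settings, you can query it with ffprobe from Python. A minimal sketch, assuming ffprobe is on your PATH ('stimulus.mp4' is a placeholder file name):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import json&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
def parse_rate(rate):&lt;br /&gt;
    # Convert an ffprobe rate string such as '30000/1001' to a float&lt;br /&gt;
    num, _, den = rate.partition('/')&lt;br /&gt;
    return float(num) / float(den or 1)&lt;br /&gt;
&lt;br /&gt;
def probe_video(path):&lt;br /&gt;
    # Query the first video stream for size and frame rates&lt;br /&gt;
    out = subprocess.run(&lt;br /&gt;
        ['ffprobe', '-v', 'error', '-select_streams', 'v:0',&lt;br /&gt;
         '-show_entries', 'stream=width,height,avg_frame_rate,r_frame_rate',&lt;br /&gt;
         '-of', 'json', path],&lt;br /&gt;
        capture_output=True, text=True, check=True)&lt;br /&gt;
    stream = json.loads(out.stdout)['streams'][0]&lt;br /&gt;
    avg = parse_rate(stream['avg_frame_rate'])&lt;br /&gt;
    nominal = parse_rate(stream['r_frame_rate'])&lt;br /&gt;
    width, height = stream['width'], stream['height']&lt;br /&gt;
    print(f'{width}x{height} at {avg:.2f} fps')&lt;br /&gt;
    # For CFR material the average and nominal rates should coincide&lt;br /&gt;
    if abs(avg - nominal) &amp;gt; 0.01:&lt;br /&gt;
        print('Warning: variable frame rate; re-encode with -vsync cfr')&lt;br /&gt;
    return avg&lt;br /&gt;
&lt;br /&gt;
# Example (requires a real file):&lt;br /&gt;
# probe_video('stimulus.mp4')&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;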
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop background → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and finalize the output file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|PCM (WAV), uncompressed (or another lossless, high-quality codec)&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Set your audio for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
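Before converting, you can verify a WAV file's current format with Python's standard-library wave module. A minimal sketch (the file name is a placeholder):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import wave&lt;br /&gt;
&lt;br /&gt;
def check_wav(path, rate=48000, channels=2, sampwidth=2):&lt;br /&gt;
    # Read the WAV header and compare it to the recommended format&lt;br /&gt;
    with wave.open(path, 'rb') as w:&lt;br /&gt;
        fmt = {'rate': w.getframerate(),&lt;br /&gt;
               'channels': w.getnchannels(),&lt;br /&gt;
               'sampwidth_bytes': w.getsampwidth()}&lt;br /&gt;
    fmt['ok'] = (fmt['rate'] == rate and&lt;br /&gt;
                 fmt['channels'] == channels and&lt;br /&gt;
                 fmt['sampwidth_bytes'] == sampwidth)&lt;br /&gt;
    return fmt&lt;br /&gt;
&lt;br /&gt;
# Example (requires a real file):&lt;br /&gt;
# print(check_wav('tick_rhythm_5min.wav'))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;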
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your device → Properties:&lt;br /&gt;
&lt;br /&gt;
   Advanced tab:&lt;br /&gt;
&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit, Studio Quality.&lt;br /&gt;
&lt;br /&gt;
   - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
   - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&lt;br /&gt;
   Enhancements tab:&lt;br /&gt;
&lt;br /&gt;
   - Disable all enhancements.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd  # needed for sd.query_devices below&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mp4&lt;br /&gt;
	-c:v libx264: Encode video using H.264.&lt;br /&gt;
	-preset slow: Optimize for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces constant frame rate.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio as uncompressed 16-bit PCM.&lt;br /&gt;
	-ar 48000: Sets audio sample rate to 48 kHz.&lt;br /&gt;
	-fflags +genpts: Ensures accurate timestamps.&lt;br /&gt;
	-async 1: Synchronizes audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Enumeration===&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can use '''Shotcut''', a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The [[Lab Computer]] displays are typically set to 1920×1080 at 120 Hz. We have found this sufficient for most applications; higher resolutions and refresh rates are possible if needed.&lt;br /&gt;
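Before relying on frame-based timing, it is worth measuring the actual refresh rate from your script. A small sketch using PsychoPy's getActualFrameRate(); the 120 Hz nominal value below is an assumption matching the lab displays mentioned above:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
def refresh_ok(measured, nominal=120.0, tol=1.0):&lt;br /&gt;
    # True if the measured rate is within tol Hz of the nominal rate;&lt;br /&gt;
    # getActualFrameRate() returns None when no stable rate was found&lt;br /&gt;
    return measured is not None and abs(measured - nominal) &amp;lt;= tol&lt;br /&gt;
&lt;br /&gt;
# Example (requires PsychoPy and a display):&lt;br /&gt;
# from psychopy import visual&lt;br /&gt;
# win = visual.Window(fullscr=True)&lt;br /&gt;
# rate = win.getActualFrameRate(nIdentical=10, nWarmUpFrames=60)&lt;br /&gt;
# print(rate, refresh_ok(rate))&lt;br /&gt;
# win.close()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;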
&lt;br /&gt;
==Editing==&lt;br /&gt;
We recommend using DaVinci Resolve for editing and converting video files. DaVinci Resolve is a free, professional-grade editing program, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Playback==&lt;br /&gt;
&lt;br /&gt;
=== PsychoPy ===&lt;br /&gt;
Example demonstrating how to play a video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import time&lt;br /&gt;
import keyboard&lt;br /&gt;
from psychopy import visual &lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
## Setup Section&lt;br /&gt;
win = visual.Window([720,720], fullscr=False, monitor=&amp;quot;testMonitor&amp;quot;, units='cm')&lt;br /&gt;
&lt;br /&gt;
# append this stimulus to the list of prepared stimuli&lt;br /&gt;
vlc_movies = []&lt;br /&gt;
my_movies = ['YourMovie.mp4']  # paths to your movies, relative to this directory&lt;br /&gt;
&lt;br /&gt;
for movie in my_movies:&lt;br /&gt;
    mov = visual.VlcMovieStim(win, movie,&lt;br /&gt;
    size=600,  # set as `None` to use the native video size&lt;br /&gt;
    pos=[0, 0],  # pos specifies the /center/ of the movie stim location&lt;br /&gt;
    flipVert=False,  # flip the video picture vertically&lt;br /&gt;
    flipHoriz=False,  # flip the video picture horizontally&lt;br /&gt;
    loop=False,  # replay the video when it reaches the end&lt;br /&gt;
    autoStart=True)  # start the video automatically when first drawn&lt;br /&gt;
    vlc_movies.append(mov)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;playing video....&amp;quot;)&lt;br /&gt;
while not(keyboard.is_pressed('q')) and vlc_movies[0].status != visual.FINISHED:&lt;br /&gt;
    vlc_movies[0].draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
    buffer_in = vlc_movies[0].frameIndex  # current frame index, useful for logging&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Stop&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
## Closing Section&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6025</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6025"/>
		<updated>2025-04-28T14:47:00Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Enumeration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart=False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with the audio disconnected:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to disconnect audio from video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '48000', output_audio])  # 48 kHz, matching the recommended audio settings&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
- Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
- Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
- Stabilize the camera and avoid automatic exposure, white balance, or focus during recording to prevent inconsistencies.&lt;br /&gt;
- Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
&lt;br /&gt;
You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
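To check whether an existing clip already matches these settings, you can query it with ffprobe from Python. A minimal sketch, assuming ffprobe is on your PATH ('stimulus.mp4' is a placeholder file name):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import json&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
def parse_rate(rate):&lt;br /&gt;
    # Convert an ffprobe rate string such as '30000/1001' to a float&lt;br /&gt;
    num, _, den = rate.partition('/')&lt;br /&gt;
    return float(num) / float(den or 1)&lt;br /&gt;
&lt;br /&gt;
def probe_video(path):&lt;br /&gt;
    # Query the first video stream for size and frame rates&lt;br /&gt;
    out = subprocess.run(&lt;br /&gt;
        ['ffprobe', '-v', 'error', '-select_streams', 'v:0',&lt;br /&gt;
         '-show_entries', 'stream=width,height,avg_frame_rate,r_frame_rate',&lt;br /&gt;
         '-of', 'json', path],&lt;br /&gt;
        capture_output=True, text=True, check=True)&lt;br /&gt;
    stream = json.loads(out.stdout)['streams'][0]&lt;br /&gt;
    avg = parse_rate(stream['avg_frame_rate'])&lt;br /&gt;
    nominal = parse_rate(stream['r_frame_rate'])&lt;br /&gt;
    width, height = stream['width'], stream['height']&lt;br /&gt;
    print(f'{width}x{height} at {avg:.2f} fps')&lt;br /&gt;
    # For CFR material the average and nominal rates should coincide&lt;br /&gt;
    if abs(avg - nominal) &amp;gt; 0.01:&lt;br /&gt;
        print('Warning: variable frame rate; re-encode with -vsync cfr')&lt;br /&gt;
    return avg&lt;br /&gt;
&lt;br /&gt;
# Example (requires a real file):&lt;br /&gt;
# probe_video('stimulus.mp4')&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;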
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop background → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and finalize the output file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|PCM (WAV), uncompressed (or another lossless, high-quality codec)&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Set your audio for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
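Before converting, you can verify a WAV file's current format with Python's standard-library wave module. A minimal sketch (the file name is a placeholder):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import wave&lt;br /&gt;
&lt;br /&gt;
def check_wav(path, rate=48000, channels=2, sampwidth=2):&lt;br /&gt;
    # Read the WAV header and compare it to the recommended format&lt;br /&gt;
    with wave.open(path, 'rb') as w:&lt;br /&gt;
        fmt = {'rate': w.getframerate(),&lt;br /&gt;
               'channels': w.getnchannels(),&lt;br /&gt;
               'sampwidth_bytes': w.getsampwidth()}&lt;br /&gt;
    fmt['ok'] = (fmt['rate'] == rate and&lt;br /&gt;
                 fmt['channels'] == channels and&lt;br /&gt;
                 fmt['sampwidth_bytes'] == sampwidth)&lt;br /&gt;
    return fmt&lt;br /&gt;
&lt;br /&gt;
# Example (requires a real file):&lt;br /&gt;
# print(check_wav('tick_rhythm_5min.wav'))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;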
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your device → Properties:&lt;br /&gt;
&lt;br /&gt;
   Advanced tab:&lt;br /&gt;
&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit, Studio Quality.&lt;br /&gt;
&lt;br /&gt;
   - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
   - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&lt;br /&gt;
   Enhancements tab:&lt;br /&gt;
&lt;br /&gt;
   - Disable all enhancements.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd  # needed for sd.query_devices below&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mp4&lt;br /&gt;
	-c:v libx264: Encode video using H.264.&lt;br /&gt;
	-preset slow: Optimize for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces constant frame rate.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio as uncompressed 16-bit PCM.&lt;br /&gt;
	-ar 48000: Sets audio sample rate to 48 kHz.&lt;br /&gt;
	-fflags +genpts: Ensures accurate timestamps.&lt;br /&gt;
	-async 1: Synchronizes audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Additional Tips===&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can use '''Shotcut''', a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The [[Lab Computer]] displays are typically set to 1920×1080 at 120 Hz, which is sufficient for most applications; higher resolutions and refresh rates are possible if your experiment requires them.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
We recommend using DaVinci Resolve for editing and converting video files. DaVinci Resolve is a free, professional-grade editing program, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Playback==&lt;br /&gt;
&lt;br /&gt;
=== PsychoPy ===&lt;br /&gt;
Example demonstrating how to play a video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import time&lt;br /&gt;
import keyboard&lt;br /&gt;
from psychopy import visual &lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
## Setup Section&lt;br /&gt;
win = visual.Window([720,720], fullscr=False, monitor=&amp;quot;testMonitor&amp;quot;, units='cm')&lt;br /&gt;
&lt;br /&gt;
# append this stimulus to the list of prepared stimuli&lt;br /&gt;
vlc_movies = []&lt;br /&gt;
my_movies = ['YourMovie.mp4']  # paths to your movies, relative to this directory&lt;br /&gt;
&lt;br /&gt;
for movie in my_movies:&lt;br /&gt;
    mov = visual.VlcMovieStim(win, movie,&lt;br /&gt;
    size=600,  # set as `None` to use the native video size&lt;br /&gt;
    pos=[0, 0],  # pos specifies the /center/ of the movie stim location&lt;br /&gt;
    flipVert=False,  # flip the video picture vertically&lt;br /&gt;
    flipHoriz=False,  # flip the video picture horizontally&lt;br /&gt;
    loop=False,  # whether to replay the video when it reaches the end&lt;br /&gt;
    autoStart=True)  # start the video automatically when first drawn&lt;br /&gt;
    vlc_movies.append(mov)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;playing video....&amp;quot;)&lt;br /&gt;
while not(keyboard.is_pressed('q')) and vlc_movies[0].status != visual.FINISHED:&lt;br /&gt;
    vlc_movies[0].draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
    buffer_in = vlc_movies[0].frameIndex&lt;br /&gt;
    print(vlc_movies[0].status)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Stop&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
## Closing Section&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6024</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6024"/>
		<updated>2025-04-28T14:46:21Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise always consulting a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart= False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with the audio disconnected (played as a separate stream):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)  # small offset to compensate for audio onset latency; tune for your setup&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to disconnect audio from video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '44100', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
- Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
- Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
- Stabilize the camera and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
- Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|Enforce a constant frame rate (avoid variable-frame-rate recordings)&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
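&lt;br /&gt;
As a quick check, you can probe a recording and confirm it matches the settings above before building an experiment around it. The sketch below is illustrative: it assumes ffprobe (shipped with FFmpeg) is on your PATH, and clip.mp4 is a placeholder file name.&lt;br /&gt;
&lt;br /&gt;
```python
import json
import subprocess

def parse_rate(rate):
    # Convert an ffprobe rate string like '60/1' or '30000/1001' to a float.
    num, _, den = rate.partition('/')
    return float(num) / float(den or 1)

def check_video(path, want_w=1920, want_h=1080, want_fps=60.0):
    # Query the first video stream's geometry and frame rates.
    out = subprocess.run(
        ['ffprobe', '-v', 'error', '-select_streams', 'v:0',
         '-show_entries', 'stream=width,height,avg_frame_rate,r_frame_rate',
         '-of', 'json', path],
        capture_output=True, text=True, check=True).stdout
    s = json.loads(out)['streams'][0]
    avg, real = parse_rate(s['avg_frame_rate']), parse_rate(s['r_frame_rate'])
    print(f"{s['width']}x{s['height']} @ {avg:.2f} fps")
    # For constant-frame-rate material the two reported rates should agree.
    if abs(avg - real) > 0.01:
        print('Warning: frame rate looks variable (VFR); re-encode with -vsync cfr.')
    return (s['width'], s['height']) == (want_w, want_h) and abs(avg - want_fps) < 0.5
```
&lt;br /&gt;
For example, check_video('clip.mp4') prints the measured geometry and frame rate, warns if the material looks variable-frame-rate, and returns True only when it matches the recommendations.&lt;br /&gt;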
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height, freq)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp + '_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and finalize the video file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|lossless or high-quality codecs&lt;br /&gt;
|-&lt;br /&gt;
!PCM (WAV)&lt;br /&gt;
|uncompressed&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Set your audio for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
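&lt;br /&gt;
Before playback, it can help to verify that a WAV file actually has the format above, since mismatched sample rates are a common source of latency and playback errors. The sketch below uses only the Python standard library; stimulus.wav is a placeholder file name.&lt;br /&gt;
&lt;br /&gt;
```python
import wave

def check_wav(path, want_rate=48000, want_channels=2, want_sampwidth=2):
    # Read the WAV header and compare it against the recommended format
    # (48 kHz, stereo, 16-bit, i.e. a sample width of 2 bytes).
    with wave.open(path, 'rb') as w:
        rate, channels, sampwidth = w.getframerate(), w.getnchannels(), w.getsampwidth()
    ok = (rate, channels, sampwidth) == (want_rate, want_channels, want_sampwidth)
    print(f"{path}: {rate} Hz, {channels} ch, {8 * sampwidth}-bit -> "
          f"{'OK' if ok else 'needs conversion'}")
    return ok
```
&lt;br /&gt;
If check_wav('stimulus.wav') returns False, convert the file with the ffmpeg command above before using it in an experiment.&lt;br /&gt;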
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your device → Properties:&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab: set Default Format to 48000 Hz, 16 bit, Studio Quality.&lt;br /&gt;
&lt;br /&gt;
   - Enhancements tab: disable all enhancements.&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab, Exclusive Mode:&lt;br /&gt;
&lt;br /&gt;
     - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
     - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mp4&lt;br /&gt;
	-c:v libx264: Encodes video using H.264.&lt;br /&gt;
	-preset slow: Optimizes for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces a constant frame rate.&lt;br /&gt;
	-g 30: Sets the keyframe (GOP) interval to 30 frames.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio as uncompressed 16-bit PCM.&lt;br /&gt;
	-ar 48000: Sets the audio sample rate to 48 kHz.&lt;br /&gt;
	-fflags +genpts: Generates accurate presentation timestamps.&lt;br /&gt;
	-async 1: Synchronizes the audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
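&lt;br /&gt;
If you need to convert many clips, the same command can be scripted from Python. This is a minimal sketch: build_encode_cmd and encode are hypothetical helper names, the file names are placeholders, and ffmpeg is assumed to be on your PATH.&lt;br /&gt;
&lt;br /&gt;
```python
import subprocess

def build_encode_cmd(src, dst):
    # Assemble the recommended encode: H.264 video at CRF 18 with a constant
    # frame rate, plus uncompressed 16-bit PCM audio at 48 kHz.
    return ['ffmpeg', '-y', '-i', src,
            '-c:v', 'libx264', '-preset', 'slow', '-crf', '18',
            '-vsync', 'cfr', '-g', '30',
            '-c:a', 'pcm_s16le', '-ar', '48000',
            '-fflags', '+genpts', '-async', '1', dst]

def encode(src, dst):
    # check=True raises CalledProcessError if ffmpeg exits with an error.
    subprocess.run(build_encode_cmd(src, dst), check=True)
```
&lt;br /&gt;
For example, encode('input.mp4', 'output.mp4') runs the command above for one clip; loop over a directory to batch-convert stimuli.&lt;br /&gt;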
&lt;br /&gt;
===Additional Tips===&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can use '''Shotcut''', a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The [[Lab Computer]] displays are typically set to 1920×1080 at 120 Hz, which is sufficient for most applications; higher resolutions and refresh rates are possible if your experiment requires them.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
We recommend using DaVinci Resolve for editing and converting video files. DaVinci Resolve is a free, professional-grade editing program, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Playback==&lt;br /&gt;
&lt;br /&gt;
=== PsychoPy ===&lt;br /&gt;
Example demonstrating how to play a video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import time&lt;br /&gt;
import keyboard&lt;br /&gt;
from psychopy import visual &lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
## Setup Section&lt;br /&gt;
win = visual.Window([720,720], fullscr=False, monitor=&amp;quot;testMonitor&amp;quot;, units='cm')&lt;br /&gt;
&lt;br /&gt;
# append this stimulus to the list of prepared stimuli&lt;br /&gt;
vlc_movies = []&lt;br /&gt;
my_movies = ['YourMovie.mp4']  # paths to your movies, relative to this directory&lt;br /&gt;
&lt;br /&gt;
for movie in my_movies:&lt;br /&gt;
    mov = visual.VlcMovieStim(win, movie,&lt;br /&gt;
    size=600,  # set as `None` to use the native video size&lt;br /&gt;
    pos=[0, 0],  # pos specifies the /center/ of the movie stim location&lt;br /&gt;
    flipVert=False,  # flip the video picture vertically&lt;br /&gt;
    flipHoriz=False,  # flip the video picture horizontally&lt;br /&gt;
    loop=False,  # whether to replay the video when it reaches the end&lt;br /&gt;
    autoStart=True)  # start the video automatically when first drawn&lt;br /&gt;
    vlc_movies.append(mov)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;playing video....&amp;quot;)&lt;br /&gt;
while not(keyboard.is_pressed('q')) and vlc_movies[0].status != visual.FINISHED:&lt;br /&gt;
    vlc_movies[0].draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
    buffer_in = vlc_movies[0].frameIndex&lt;br /&gt;
    print(vlc_movies[0].status)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Stop&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
## Closing Section&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6023</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6023"/>
		<updated>2025-04-28T14:45:38Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise always consulting a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart= False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with the audio disconnected (played as a separate stream):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)  # small offset to compensate for audio onset latency; tune for your setup&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to disconnect audio from video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '44100', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
- Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
- Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
- Stabilize the camera and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
- Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|Enforce a constant frame rate (avoid variable-frame-rate recordings)&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height, freq)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|PCM (WAV), uncompressed, or another lossless/high-quality codec&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Set your audio for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
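&lt;br /&gt;
Before an experiment, it is worth verifying that your stimulus WAV files actually have the recommended format. A small sketch using only the standard-library wave module; the in-memory example file is purely illustrative, in practice you would pass a file path such as output_fixed.wav.&lt;br /&gt;

```python
import io
import struct
import wave

def matches_recommended(wav_file, rate=48000, channels=2, sampwidth=2):
    """Check that a WAV file is 48 kHz, stereo, 16-bit PCM (sample width of 2 bytes)."""
    with wave.open(wav_file, 'rb') as w:
        return (w.getframerate() == rate and
                w.getnchannels() == channels and
                w.getsampwidth() == sampwidth)

# Build a tiny in-memory WAV (one stereo frame of silence) just for demonstration
buf = io.BytesIO()
with wave.open(buf, 'wb') as w:
    w.setnchannels(2)
    w.setsampwidth(2)        # 2 bytes = 16-bit samples
    w.setframerate(48000)
    w.writeframes(struct.pack('<hh', 0, 0))
buf.seek(0)
print(matches_recommended(buf))  # True
```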
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your device → Properties → Advanced tab:&lt;br /&gt;
&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit, Studio Quality.&lt;br /&gt;
&lt;br /&gt;
   - In the Enhancements tab of the same properties window, disable all enhancements.&lt;br /&gt;
&lt;br /&gt;
   - Exclusive Mode (in the same Advanced tab):&lt;br /&gt;
&lt;br /&gt;
     - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
     - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
wave_file.stop()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mp4&lt;br /&gt;
	-c:v libx264: Encode video using H.264.&lt;br /&gt;
	-preset slow: Optimize for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces constant frame rate.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio in uncompressed WAV format.&lt;br /&gt;
	-ar 48000: Sets audio sample rate to 48.0 kHz.&lt;br /&gt;
	-fflags +genpts: Ensures accurate timestamps.&lt;br /&gt;
	-async 1: Synchronizes audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
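&lt;br /&gt;
To run this command from an experiment script, you can assemble it as an argument list and hand it to subprocess.run, which avoids shell-quoting issues. A sketch; build_ffmpeg_cmd is a hypothetical helper, the file names are placeholders, and the sample rate follows the 48 kHz audio recommendation above.&lt;br /&gt;

```python
import subprocess

def build_ffmpeg_cmd(src, dst, crf=18):
    """Assemble the recommended encoding command as a subprocess argument list."""
    return ['ffmpeg', '-i', src,
            '-c:v', 'libx264', '-preset', 'slow', '-crf', str(crf),
            '-vsync', 'cfr', '-g', '30',
            '-c:a', 'pcm_s16le', '-ar', '48000',
            '-fflags', '+genpts', '-async', '1',
            dst]

cmd = build_ffmpeg_cmd('input.mp4', 'output.mp4')
print(' '.join(cmd))
# To actually encode (requires ffmpeg on PATH):
# subprocess.run(cmd, check=True)
```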
&lt;br /&gt;
===Conclusion===&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can use Shotcut, a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The [[Lab Computer]] displays are typically set to 1920×1080 at 120 Hz. We have found this sufficient for most applications, though higher settings are possible.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
We recommend using DaVinci Resolve for editing and converting video files. DaVinci Resolve is a free, professional-grade editing program, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Playback==&lt;br /&gt;
&lt;br /&gt;
=== PsychoPy ===&lt;br /&gt;
Example demonstrating how to play a video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
from psychopy import visual &lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
## Setup Section&lt;br /&gt;
win = visual.Window([720,720], fullscr=False, monitor=&amp;quot;testMonitor&amp;quot;, units='cm')&lt;br /&gt;
&lt;br /&gt;
# append this stimulus to the list of prepared stimuli&lt;br /&gt;
vlc_movies = []&lt;br /&gt;
my_movies = ['YourMovie.mp4']#path to your movies from this directory&lt;br /&gt;
&lt;br /&gt;
for movie in my_movies:&lt;br /&gt;
    mov = visual.VlcMovieStim(win, movie,&lt;br /&gt;
    size=600,  # set as `None` to use the native video size&lt;br /&gt;
    pos=[0, 0],  # pos specifies the /center/ of the movie stim location&lt;br /&gt;
    flipVert=False,  # flip the video picture vertically&lt;br /&gt;
    flipHoriz=False,  # flip the video picture horizontally&lt;br /&gt;
    loop=False,  # replay the video when it reaches the end&lt;br /&gt;
    autoStart=True)  # start the video automatically when first drawn&lt;br /&gt;
    vlc_movies.append(mov)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;playing video....&amp;quot;)&lt;br /&gt;
while not(keyboard.is_pressed('q')) and vlc_movies[0].status != visual.FINISHED:&lt;br /&gt;
    vlc_movies[0].draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Stop&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
## Closing Section&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6022</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6022"/>
		<updated>2025-04-28T14:45:09Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart= False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with audio disconnected:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to disconnect audio from video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '44100', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60fps or higher.&lt;br /&gt;
Stabilize the camera and avoid automatic exposure, white balance, or focus during recording to prevent inconsistencies.&lt;br /&gt;
Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforced (e.g., -vsync cfr when encoding with FFmpeg)&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height, freq)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|PCM (WAV), uncompressed, or another lossless/high-quality codec&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Set your audio for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your device → Properties → Advanced tab:&lt;br /&gt;
&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit, Studio Quality.&lt;br /&gt;
&lt;br /&gt;
   - In the Enhancements tab of the same properties window, disable all enhancements.&lt;br /&gt;
&lt;br /&gt;
   - Exclusive Mode (in the same Advanced tab):&lt;br /&gt;
&lt;br /&gt;
     - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
     - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
wave_file.stop()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mp4&lt;br /&gt;
	-c:v libx264: Encode video using H.264.&lt;br /&gt;
	-preset slow: Optimize for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces constant frame rate.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio in uncompressed WAV format.&lt;br /&gt;
	-ar 48000: Sets audio sample rate to 48.0 kHz.&lt;br /&gt;
	-fflags +genpts: Ensures accurate timestamps.&lt;br /&gt;
	-async 1: Synchronizes audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Conclusion===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
- '''Ensure Low Latency''': If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
- '''Avoid Resampling''': If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
- '''Testing''': Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can use Shotcut, a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The [[Lab Computer]] displays are typically set to 1920×1080 at 120 Hz. We have found this sufficient for most applications, though higher settings are possible.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
We recommend using DaVinci Resolve for editing and converting video files. DaVinci Resolve is a free, professional-grade editing program, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Playback==&lt;br /&gt;
&lt;br /&gt;
=== PsychoPy ===&lt;br /&gt;
Example demonstrating how to play a video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
from psychopy import visual &lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
## Setup Section&lt;br /&gt;
win = visual.Window([720,720], fullscr=False, monitor=&amp;quot;testMonitor&amp;quot;, units='cm')&lt;br /&gt;
&lt;br /&gt;
# append this stimulus to the list of prepared stimuli&lt;br /&gt;
vlc_movies = []&lt;br /&gt;
my_movies = ['YourMovie.mp4']#path to your movies from this directory&lt;br /&gt;
&lt;br /&gt;
for movie in my_movies:&lt;br /&gt;
    mov = visual.VlcMovieStim(win, movie,&lt;br /&gt;
    size=600,  # set as `None` to use the native video size&lt;br /&gt;
    pos=[0, 0],  # pos specifies the /center/ of the movie stim location&lt;br /&gt;
    flipVert=False,  # flip the video picture vertically&lt;br /&gt;
    flipHoriz=False,  # flip the video picture horizontally&lt;br /&gt;
    loop=False,  # replay the video when it reaches the end&lt;br /&gt;
    autoStart=True)  # start the video automatically when first drawn&lt;br /&gt;
    vlc_movies.append(mov)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;playing video....&amp;quot;)&lt;br /&gt;
while not(keyboard.is_pressed('q')) and vlc_movies[0].status != visual.FINISHED:&lt;br /&gt;
    vlc_movies[0].draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Stop&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
## Closing Section&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6021</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6021"/>
		<updated>2025-04-28T14:42:23Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* FFmpeg */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart=False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with the audio disconnected and played separately:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to separate a video file into video-only and audio-only files:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '44100', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and avoid automatic exposure, white balance, or focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
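To verify that an existing file already matches these settings, you can query it with ffprobe (part of FFmpeg). A minimal sketch, assuming ffprobe is on your PATH; the file name you pass in is a placeholder:&lt;br /&gt;

```python
import json
import subprocess

def ffprobe_cmd(path):
    # Ask ffprobe for the first video stream's codec, size and frame rates as JSON.
    return [
        'ffprobe', '-v', 'error', '-select_streams', 'v:0',
        '-show_entries', 'stream=codec_name,width,height,avg_frame_rate,r_frame_rate',
        '-of', 'json', path,
    ]

def check_video(path):
    # Compare a video file against the recommended settings:
    # H.264, 1920x1080, 60 fps, constant frame rate.
    out = subprocess.run(ffprobe_cmd(path), capture_output=True, text=True).stdout
    s = json.loads(out)['streams'][0]
    num, den = map(int, s['avg_frame_rate'].split('/'))
    fps = num / den if den else 0.0
    ok = (s['codec_name'] == 'h264'
          and (s['width'], s['height']) == (1920, 1080)
          and round(fps) == 60
          and s['avg_frame_rate'] == s['r_frame_rate'])  # equal rates suggest CFR
    return ok, s
```

If the check fails, re-encode the file with the recommended FFmpeg command further down this page.&lt;br /&gt;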
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height)&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|lossless or high-quality codecs&lt;br /&gt;
|-&lt;br /&gt;
!PCM (WAV)&lt;br /&gt;
|uncompressed&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
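You can check whether a WAV file matches these recommendations with Python's standard wave module. A minimal sketch (the file name is a placeholder):&lt;br /&gt;

```python
import wave

def check_wav(path, rate=48000, channels=2, sampwidth=2):
    # Compare a PCM WAV file against the recommended settings:
    # 48 kHz sample rate, stereo, 16-bit (2-byte) samples.
    with wave.open(path, 'rb') as w:
        p = w.getparams()
    ok = (p.framerate == rate
          and p.nchannels == channels
          and p.sampwidth == sampwidth)
    return ok, p
```

If the file does not match, convert it with the ffmpeg command below.&lt;br /&gt;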
&lt;br /&gt;
Prepare your audio files for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click the playback device → Properties:&lt;br /&gt;
&lt;br /&gt;
   Advanced tab:&lt;br /&gt;
&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit, Studio Quality.&lt;br /&gt;
&lt;br /&gt;
   - Exclusive Mode: Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
   - Exclusive Mode: Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&lt;br /&gt;
   Enhancements tab:&lt;br /&gt;
&lt;br /&gt;
   - Disable all enhancements.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd  # needed for the OS-level device query below&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
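Following the subprocess pattern used in the Python examples above, these flags could be combined as follows (a sketch; the file names are placeholders and ffmpeg must be on your PATH):&lt;br /&gt;

```python
import subprocess

input_file = 'input.mp4'           # placeholder input
output_file = 'output_synced.mp4'  # placeholder output

cmd = [
    'ffmpeg',
    '-fflags', '+genpts',              # input option: regenerate presentation timestamps
    '-i', input_file,
    '-map', '0:v:0', '-map', '0:a:0',  # explicitly pick the first video and audio streams
    '-async', '1',                     # gently resample audio to correct drift
    '-c:v', 'copy', '-c:a', 'aac',
    output_file,
]
# subprocess.run(cmd)  # uncomment to actually run the conversion
```

Note that `-fflags +genpts` is placed before `-i` because it applies to how the input is read.&lt;br /&gt;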
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mp4&lt;br /&gt;
	-c:v libx264: Encode video using H.264.&lt;br /&gt;
	-preset slow: Optimize for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces constant frame rate.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio in uncompressed WAV format.&lt;br /&gt;
	-ar 48000: Sets audio sample rate to 48.0 kHz.&lt;br /&gt;
	-fflags +genpts: Ensures accurate timestamps.&lt;br /&gt;
	-async 1: Synchronizes audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Conclusion===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
- Ensure Low Latency: If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
- Avoid Resampling: If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
- Testing: Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can use Shotcut, a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The [[Lab Computer]] displays are typically set to 1920×1080 at 120 Hz, which we have found sufficient for most applications. Higher refresh rates and resolutions are possible.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
We recommend using DaVinci Resolve for editing and converting video files. DaVinci Resolve is a free, professional-grade editing program, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Playback==&lt;br /&gt;
&lt;br /&gt;
=== PsychoPy ===&lt;br /&gt;
Example demonstrating how to play a video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import time&lt;br /&gt;
import keyboard&lt;br /&gt;
from psychopy import visual &lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
## Setup Section&lt;br /&gt;
win = visual.Window([720,720], fullscr=False, monitor=&amp;quot;testMonitor&amp;quot;, units='cm')&lt;br /&gt;
&lt;br /&gt;
# append this stimulus to the list of prepared stimuli&lt;br /&gt;
vlc_movies = []&lt;br /&gt;
my_movies = ['YourMovie.mp4']  # paths to your movies, relative to this directory&lt;br /&gt;
&lt;br /&gt;
for movie in my_movies:&lt;br /&gt;
    mov = visual.VlcMovieStim(win, movie,&lt;br /&gt;
    size=600,  # set as `None` to use the native video size&lt;br /&gt;
    pos=[0, 0],  # pos specifies the /center/ of the movie stim location&lt;br /&gt;
    flipVert=False,  # flip the video picture vertically&lt;br /&gt;
    flipHoriz=False,  # flip the video picture horizontally&lt;br /&gt;
    loop=False,  # set True to replay the video when it reaches the end&lt;br /&gt;
    autoStart=True)  # start the video automatically when first drawn&lt;br /&gt;
    vlc_movies.append(mov)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;playing video....&amp;quot;)&lt;br /&gt;
while not(keyboard.is_pressed('q')) and vlc_movies[0].status != visual.FINISHED:&lt;br /&gt;
    vlc_movies[0].draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
    buffer_in = vlc_movies[0].frameIndex&lt;br /&gt;
    print(vlc_movies[0].status)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Stop&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
## Closing Section&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6020</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6020"/>
		<updated>2025-04-28T14:40:37Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* FFmpeg */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart=False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with the audio disconnected and played separately:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to separate a video file into video-only and audio-only files:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '44100', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and avoid automatic exposure, white balance, or focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height)&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|lossless or high-quality codecs&lt;br /&gt;
|-&lt;br /&gt;
!PCM (WAV)&lt;br /&gt;
|uncompressed&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Prepare your audio files for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sound → Playback → right-click → Properties → Advanced Tab:&lt;br /&gt;
&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit, Studio Quality.&lt;br /&gt;
&lt;br /&gt;
   - Disable sound enhancements.&lt;br /&gt;
&lt;br /&gt;
   - In the same properties window, go to Enhancements tab → Disable all enhancements.&lt;br /&gt;
&lt;br /&gt;
   - Exclusive Mode:&lt;br /&gt;
&lt;br /&gt;
   - In the same Advanced tab.&lt;br /&gt;
&lt;br /&gt;
   - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
   - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd  # needed for the OS-level device query below&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    core.wait(0.01)  # yield the CPU instead of busy-waiting&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
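When scripting this from Python, it helps to build the argument list once so the synchronization flags are always applied together. A minimal sketch (the file names are placeholders; the command is only constructed and printed here, not executed):

```python
def build_sync_command(src, dst):
    """Assemble an ffmpeg command that regenerates timestamps and
    keeps the first video and audio streams in sync."""
    return [
        'ffmpeg',
        '-fflags', '+genpts',  # regenerate presentation timestamps (input option, before -i)
        '-i', src,
        '-map', '0:v:0',       # first video stream only
        '-map', '0:a:0',       # first audio stream only
        '-async', '1',         # stretch/squeeze audio to correct drift
        dst,
    ]

cmd = build_sync_command('input.mp4', 'output.mp4')
print(' '.join(cmd))
```

Pass the resulting list to `subprocess.run(cmd)` to execute it.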
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mov&lt;br /&gt;
	-c:v libx264: Encodes video using H.264.&lt;br /&gt;
	-preset slow: Optimizes for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces a constant frame rate.&lt;br /&gt;
	-g 30: Places a keyframe every 30 frames, which helps precise seeking.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio as uncompressed 16-bit PCM; note that PCM requires a container such as MOV or MKV, hence the .mov output.&lt;br /&gt;
	-ar 48000: Sets the audio sample rate to 48 kHz.&lt;br /&gt;
	-fflags +genpts: Ensures accurate timestamps.&lt;br /&gt;
	-async 1: Synchronizes audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Tips===&lt;br /&gt;
* Ensure low latency: if you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
* Avoid resampling: where possible, keep the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
* Testing: always test playback on different devices and players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can use Shotcut, a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The [[Lab Computer]] displays are typically set to 1920×1080 at 120 Hz, which is sufficient for most applications; higher resolutions and refresh rates are possible if needed.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
We recommend using DaVinci Resolve for editing and converting video files. DaVinci Resolve is a free, professional-grade editing program, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Playback==&lt;br /&gt;
&lt;br /&gt;
=== PsychoPy ===&lt;br /&gt;
Example demonstrating how to play a video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import time&lt;br /&gt;
import keyboard&lt;br /&gt;
from psychopy import visual &lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
## Setup Section&lt;br /&gt;
win = visual.Window([720,720], fullscr=False, monitor=&amp;quot;testMonitor&amp;quot;, units='cm')&lt;br /&gt;
&lt;br /&gt;
# append this stimulus to the list of prepared stimuli&lt;br /&gt;
vlc_movies = []&lt;br /&gt;
my_movies = ['YourMovie.mp4']#path to your movies from this directory&lt;br /&gt;
&lt;br /&gt;
for movie in my_movies:&lt;br /&gt;
    mov = visual.VlcMovieStim(win, movie,&lt;br /&gt;
    size=600,  # set as `None` to use the native video size&lt;br /&gt;
    pos=[0, 0],  # pos specifies the /center/ of the movie stim location&lt;br /&gt;
    flipVert=False,  # flip the video picture vertically&lt;br /&gt;
    flipHoriz=False,  # flip the video picture horizontally&lt;br /&gt;
    loop=False,  # replay the video when it reaches the end&lt;br /&gt;
    autoStart=True)  # start the video automatically when first drawn&lt;br /&gt;
    vlc_movies.append(mov)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;playing video....&amp;quot;)&lt;br /&gt;
while not(keyboard.is_pressed('q')) and vlc_movies[0].status != visual.FINISHED:&lt;br /&gt;
    vlc_movies[0].draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
    buffer_in = vlc_movies[0].frameIndex&lt;br /&gt;
    print(vlc_movies[0].status)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Stop&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
## Closing Section&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6019</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6019"/>
		<updated>2025-04-28T14:40:11Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Recommended FFmpeg Command */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart= False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with its audio disconnected (the audio track is played separately):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to disconnect audio from video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-y', '-i', input_file, '-c:v', 'copy', '-an', output_video])  # copy the video stream, drop audio (-y overwrites without prompting)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-y', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '48000', output_audio])  # extract audio as 48 kHz 16-bit PCM&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg', '-y',               # Overwrite output without prompting&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and avoid automatic exposure, white balance, or focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
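The bitrate row can be sanity-checked with a quick calculation: uncompressed Full HD at 60 fps is roughly 3 Gbit/s, so a 10–20 Mbps H.264 target implies a compression ratio on the order of 150–300×. A small sketch of that arithmetic (assuming 24-bit RGB frames):

```python
def raw_bitrate(width, height, fps, bytes_per_pixel=3):
    """Uncompressed video bitrate in bits per second (24-bit RGB assumed)."""
    return width * height * bytes_per_pixel * 8 * fps

raw = raw_bitrate(1920, 1080, 60)
print(f"raw: {raw / 1e9:.2f} Gbit/s")  # ~2.99 Gbit/s
for target_mbps in (10, 20):
    ratio = raw / (target_mbps * 1e6)
    print(f"{target_mbps} Mbps -> compression ratio ~{ratio:.0f}x")
```

This is why CRF-based encoding (see the FFmpeg section below) is preferred over a fixed bitrate: the encoder spends bits where the content needs them while staying in this range on average.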
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop background → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for timing-critical experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
# set the Windows timer resolution to 1 ms so sleeps are accurate&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height, fps=freq)&lt;br /&gt;
    if camera is None:  # webcam could not be opened&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    camera.release()  # free the camera&lt;br /&gt;
    video_writer.release()  # flush and close the output file&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|PCM (WAV), uncompressed, or another lossless high-quality codec&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Convert your audio to the recommended format for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your playback device → Properties:&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab → set Default Format to 16 bit, 48000 Hz.&lt;br /&gt;
&lt;br /&gt;
   - Enhancements tab → disable all enhancements.&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab → Exclusive Mode:&lt;br /&gt;
&lt;br /&gt;
      - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
      - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd  # needed for the OS-level device query below&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    core.wait(0.01)  # yield the CPU instead of busy-waiting&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to call ffmpeg with these options from Python:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# Regenerate timestamps and keep the first video and audio streams in sync&lt;br /&gt;
subprocess.run(['ffmpeg', '-fflags', '+genpts', '-i', 'input.mp4',&lt;br /&gt;
                '-map', '0:v:0', '-map', '0:a:0', '-async', '1', 'output.mp4'])&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 -c:a pcm_s16le -ar 48000 -fflags +genpts -async 1 output.mov&lt;br /&gt;
	-c:v libx264: Encodes video using H.264.&lt;br /&gt;
	-preset slow: Optimizes for quality and compression efficiency.&lt;br /&gt;
	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
	-vsync cfr: Enforces a constant frame rate.&lt;br /&gt;
	-g 30: Places a keyframe every 30 frames, which helps precise seeking.&lt;br /&gt;
	-c:a pcm_s16le: Encodes audio as uncompressed 16-bit PCM; note that PCM requires a container such as MOV or MKV, hence the .mov output.&lt;br /&gt;
	-ar 48000: Sets the audio sample rate to 48 kHz.&lt;br /&gt;
	-fflags +genpts: Ensures accurate timestamps.&lt;br /&gt;
	-async 1: Synchronizes audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Tips===&lt;br /&gt;
* Ensure low latency: if you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
* Avoid resampling: where possible, keep the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
* Testing: always test playback on different devices and players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can use Shotcut, a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The [[Lab Computer]] displays are typically set to 1920×1080 at 120 Hz, which is sufficient for most applications; higher resolutions and refresh rates are possible if needed.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
We recommend using DaVinci Resolve for editing and converting video files. DaVinci Resolve is a free, professional-grade editing program, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Playback==&lt;br /&gt;
&lt;br /&gt;
=== PsychoPy ===&lt;br /&gt;
Example demonstrating how to play a video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import time&lt;br /&gt;
import keyboard&lt;br /&gt;
from psychopy import visual &lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
## Setup Section&lt;br /&gt;
win = visual.Window([720,720], fullscr=False, monitor=&amp;quot;testMonitor&amp;quot;, units='cm')&lt;br /&gt;
&lt;br /&gt;
# append this stimulus to the list of prepared stimuli&lt;br /&gt;
vlc_movies = []&lt;br /&gt;
my_movies = ['YourMovie.mp4']#path to your movies from this directory&lt;br /&gt;
&lt;br /&gt;
for movie in my_movies:&lt;br /&gt;
    mov = visual.VlcMovieStim(win, movie,&lt;br /&gt;
    size=600,  # set as `None` to use the native video size&lt;br /&gt;
    pos=[0, 0],  # pos specifies the /center/ of the movie stim location&lt;br /&gt;
    flipVert=False,  # flip the video picture vertically&lt;br /&gt;
    flipHoriz=False,  # flip the video picture horizontally&lt;br /&gt;
    loop=False,  # replay the video when it reaches the end&lt;br /&gt;
    autoStart=True)  # start the video automatically when first drawn&lt;br /&gt;
    vlc_movies.append(mov)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;playing video....&amp;quot;)&lt;br /&gt;
while not(keyboard.is_pressed('q')) and vlc_movies[0].status != visual.FINISHED:&lt;br /&gt;
    vlc_movies[0].draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
    buffer_in = vlc_movies[0].frameIndex&lt;br /&gt;
    print(vlc_movies[0].status)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Stop&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
## Closing Section&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6018</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6018"/>
		<updated>2025-04-28T14:38:07Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Recommended FFmpeg Command */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart= False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with its audio disconnected (the audio track is played separately):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to separate the audio track from a video file:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '44100', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
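&lt;br /&gt;
After splitting, you may want to confirm that the extracted streams still have matching durations. A minimal sketch, assuming ffprobe (which ships with FFmpeg) is on your PATH; the helper names and tolerance are our own choices, not part of any library:&lt;br /&gt;

```python
import math
import subprocess

def media_duration(path):
    """Return a file's duration in seconds, as reported by ffprobe."""
    result = subprocess.run(
        ['ffprobe', '-v', 'error', '-show_entries', 'format=duration',
         '-of', 'default=noprint_wrappers=1:nokey=1', path],
        capture_output=True, text=True, check=True)
    return float(result.stdout.strip())

def durations_match(d1, d2, tolerance=0.05):
    """True if two durations agree within `tolerance` seconds."""
    return math.isclose(d1, d2, abs_tol=tolerance)
```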
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
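&lt;br /&gt;
To check whether an existing recording matches the table above, the reported properties can be compared programmatically. A minimal sketch; the helper name and warning messages are our own, and the bitrate is assumed to be given as an integer in Mbps:&lt;br /&gt;

```python
def check_video_settings(width, height, fps, bitrate_mbps=None):
    """Compare reported video properties against the recommended
    settings (1920x1080, 60 fps, 10-20 Mbps) and return any warnings."""
    warnings = []
    if (width, height) != (1920, 1080):
        warnings.append(f"resolution {width}x{height} is not 1920x1080")
    if fps != 60:
        warnings.append(f"frame rate {fps} fps is not 60 fps")
    # bitrate_mbps is an integer number of megabits per second
    if bitrate_mbps is not None and bitrate_mbps not in range(10, 21):
        warnings.append(f"bitrate {bitrate_mbps} Mbps is outside 10-20 Mbps")
    return warnings

print(check_video_settings(1920, 1080, 60))  # no warnings: []
```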
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for timing-critical experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height)&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and finalize the video file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|PCM (WAV, uncompressed) or another lossless, high-quality codec&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Set your audio for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
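&lt;br /&gt;
You can verify that a converted file actually has these properties by reading its WAV header with Python's standard-library wave module (the file name is a placeholder):&lt;br /&gt;

```python
import wave

def wav_properties(path):
    """Read sample rate, channel count, and bit depth from a WAV header."""
    with wave.open(path, 'rb') as w:
        return {'sample_rate': w.getframerate(),
                'channels': w.getnchannels(),
                'bit_depth': w.getsampwidth() * 8}

# Example: after the ffmpeg command above, wav_properties('output_fixed.wav')
# should report sample_rate 48000, channels 2, bit_depth 16.
```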
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 Settings to check&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click the playback device → Properties:&lt;br /&gt;
&lt;br /&gt;
   Advanced tab:&lt;br /&gt;
&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit, Studio Quality.&lt;br /&gt;
&lt;br /&gt;
   - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
   - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&lt;br /&gt;
   Enhancements tab:&lt;br /&gt;
&lt;br /&gt;
   - Disable all enhancements.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to call FFmpeg from Python:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# Re-encode input.mp4 with regenerated, consistent timestamps (see the options above)&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', 'input.mp4', '-fflags', '+genpts', '-async', '1', 'output.mp4'])&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 \&lt;br /&gt;
       -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 \&lt;br /&gt;
       -c:a pcm_s16le -ar 44100 \&lt;br /&gt;
       -fflags +genpts -async 1 \&lt;br /&gt;
       output.mp4&lt;br /&gt;
•	-c:v libx264: Encode video using H.264.&lt;br /&gt;
•	-preset slow: Optimize for quality and compression efficiency.&lt;br /&gt;
•	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
•	-vsync cfr: Enforces constant frame rate.&lt;br /&gt;
•	-c:a pcm_s16le: Encodes audio in uncompressed WAV format.&lt;br /&gt;
•	-ar 44100: Sets audio sample rate to 44.1 kHz.&lt;br /&gt;
•	-fflags +genpts: Ensures accurate timestamps.&lt;br /&gt;
•	-async 1: Synchronizes audio and video streams.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Tips===&lt;br /&gt;
* Ensure low latency: if you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
* Avoid resampling: if possible, keep the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
* Testing: always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can use Shotcut, a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The [[Lab Computer]] displays are typically set to 1920×1080 at 120 Hz, which we have found sufficient for most applications; higher settings are possible.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
We recommend using DaVinci Resolve for editing and converting video files. DaVinci Resolve is a free, professional-grade editing program, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Playback==&lt;br /&gt;
&lt;br /&gt;
=== PsychoPy ===&lt;br /&gt;
Example demonstrating how to play a video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import time&lt;br /&gt;
import keyboard&lt;br /&gt;
from psychopy import visual &lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
## Setup Section&lt;br /&gt;
win = visual.Window([720,720], fullscr=False, monitor=&amp;quot;testMonitor&amp;quot;, units='cm')&lt;br /&gt;
&lt;br /&gt;
# append this stimulus to the list of prepared stimuli&lt;br /&gt;
vlc_movies = []&lt;br /&gt;
my_movies = ['YourMovie.mp4']#path to your movies from this directory&lt;br /&gt;
&lt;br /&gt;
for movie in my_movies:&lt;br /&gt;
    mov = visual.VlcMovieStim(win, movie,&lt;br /&gt;
    size=600,  # set as `None` to use the native video size&lt;br /&gt;
    pos=[0, 0],  # pos specifies the /center/ of the movie stim location&lt;br /&gt;
    flipVert=False,  # flip the video picture vertically&lt;br /&gt;
    flipHoriz=False,  # flip the video picture horizontally&lt;br /&gt;
    loop=False,  # replay the video when it reaches the end&lt;br /&gt;
    autoStart=True)  # start the video automatically when first drawn&lt;br /&gt;
    vlc_movies.append(mov)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;playing video....&amp;quot;)&lt;br /&gt;
while not(keyboard.is_pressed('q')) and vlc_movies[0].status != visual.FINISHED:&lt;br /&gt;
    vlc_movies[0].draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
    buffer_in = vlc_movies[0].frameIndex&lt;br /&gt;
    print(vlc_movies[0].status)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Stop&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
## Closing Section&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6017</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6017"/>
		<updated>2025-04-28T14:26:22Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Video playback */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart= False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with its audio disconnected:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to separate the audio track from a video file:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
input_file = 'tick_rhythm_combined_1min.mp4'&lt;br /&gt;
&lt;br /&gt;
directory = os.path.dirname(input_file)&lt;br /&gt;
base_name = os.path.splitext(os.path.basename(input_file))[0]&lt;br /&gt;
&lt;br /&gt;
output_video = os.path.join(directory, f&amp;quot;{base_name}_video_only.mp4&amp;quot;)&lt;br /&gt;
output_audio = os.path.join(directory, f&amp;quot;{base_name}_audio_only.wav&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-an', output_video])&lt;br /&gt;
&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', input_file, '-vn', '-acodec', 'pcm_s16le', '-ar', '44100', output_audio])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Video saved to: {output_video}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio saved to: {output_audio}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
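&lt;br /&gt;
After splitting, you may want to confirm that the extracted streams still have matching durations. A minimal sketch, assuming ffprobe (which ships with FFmpeg) is on your PATH; the helper names and tolerance are our own choices, not part of any library:&lt;br /&gt;

```python
import math
import subprocess

def media_duration(path):
    """Return a file's duration in seconds, as reported by ffprobe."""
    result = subprocess.run(
        ['ffprobe', '-v', 'error', '-show_entries', 'format=duration',
         '-of', 'default=noprint_wrappers=1:nokey=1', path],
        capture_output=True, text=True, check=True)
    return float(result.stdout.strip())

def durations_match(d1, d2, tolerance=0.05):
    """True if two durations agree within `tolerance` seconds."""
    return math.isclose(d1, d2, abs_tol=tolerance)
```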
&lt;br /&gt;
Example demonstrating how to combine audio and video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# --- Inputs&lt;br /&gt;
video_file = 'tick_rhythm_combined_1min_video_only.mp4'   # Your video-only file&lt;br /&gt;
audio_file = 'mic_segment.wav'                            # Your trimmed audio&lt;br /&gt;
output_file = 'final_synced_output.mp4'                   # Output file name&lt;br /&gt;
&lt;br /&gt;
# --- FFmpeg command to combine&lt;br /&gt;
subprocess.run([&lt;br /&gt;
    'ffmpeg',&lt;br /&gt;
    '-i', video_file,&lt;br /&gt;
    '-i', audio_file,&lt;br /&gt;
    '-c:v', 'copy',               # Copy video stream as-is&lt;br /&gt;
    '-c:a', 'aac',                # Encode audio with AAC (widely compatible)&lt;br /&gt;
    '-shortest',                 # Trim to the shortest stream (prevents overhang)&lt;br /&gt;
    output_file&lt;br /&gt;
])&lt;br /&gt;
&lt;br /&gt;
print(f&amp;quot;Synchronized video saved to: {output_file}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
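&lt;br /&gt;
To check whether an existing recording matches the table above, the reported properties can be compared programmatically. A minimal sketch; the helper name and warning messages are our own, and the bitrate is assumed to be given as an integer in Mbps:&lt;br /&gt;

```python
def check_video_settings(width, height, fps, bitrate_mbps=None):
    """Compare reported video properties against the recommended
    settings (1920x1080, 60 fps, 10-20 Mbps) and return any warnings."""
    warnings = []
    if (width, height) != (1920, 1080):
        warnings.append(f"resolution {width}x{height} is not 1920x1080")
    if fps != 60:
        warnings.append(f"frame rate {fps} fps is not 60 fps")
    # bitrate_mbps is an integer number of megabits per second
    if bitrate_mbps is not None and bitrate_mbps not in range(10, 21):
        warnings.append(f"bitrate {bitrate_mbps} Mbps is outside 10-20 Mbps")
    return warnings

print(check_video_settings(1920, 1080, 60))  # no warnings: []
```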
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for timing-critical experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height)&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and finalize the video file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|PCM (WAV, uncompressed) or another lossless, high-quality codec&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Set your audio for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
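&lt;br /&gt;
You can verify that a converted file actually has these properties by reading its WAV header with Python's standard-library wave module (the file name is a placeholder):&lt;br /&gt;

```python
import wave

def wav_properties(path):
    """Read sample rate, channel count, and bit depth from a WAV header."""
    with wave.open(path, 'rb') as w:
        return {'sample_rate': w.getframerate(),
                'channels': w.getnchannels(),
                'bit_depth': w.getsampwidth() * 8}

# Example: after the ffmpeg command above, wav_properties('output_fixed.wav')
# should report sample_rate 48000, channels 2, bit_depth 16.
```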
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 Settings to check&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click the playback device → Properties:&lt;br /&gt;
&lt;br /&gt;
   Advanced tab:&lt;br /&gt;
&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit, Studio Quality.&lt;br /&gt;
&lt;br /&gt;
   - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
   - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&lt;br /&gt;
   Enhancements tab:&lt;br /&gt;
&lt;br /&gt;
   - Disable all enhancements.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up (this script opens no window, so there is no win.close() here)&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
wave_file.stop()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to run ffmpeg from Python:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# Apply the synchronization options above (ffmpeg must be on PATH)&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', 'input.mp4', '-fflags', '+genpts', '-async', '1',&lt;br /&gt;
                '-map', '0:v:0', '-map', '0:a:0', 'output_synced.mp4'], check=True)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that re-encodes video and audio while maintaining high timing accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 \&lt;br /&gt;
       -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 \&lt;br /&gt;
       -c:a pcm_s16le -ar 44100 \&lt;br /&gt;
       -fflags +genpts -async 1 \&lt;br /&gt;
       output.mp4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* -c:v libx264: Encodes video using H.264.&lt;br /&gt;
* -preset slow: Optimizes for quality and compression efficiency.&lt;br /&gt;
* -crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
* -vsync cfr: Enforces a constant frame rate.&lt;br /&gt;
* -c:a pcm_s16le: Encodes audio as uncompressed 16-bit PCM.&lt;br /&gt;
* -ar 44100: Sets the audio sample rate to 44.1 kHz.&lt;br /&gt;
* -fflags +genpts: Generates accurate presentation timestamps.&lt;br /&gt;
* -async 1: Synchronizes the audio and video streams.&lt;br /&gt;
&lt;br /&gt;
===Tips===&lt;br /&gt;
* Ensure low latency: if you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
* Avoid resampling: if possible, keep the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
* Testing: always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can use Shotcut, a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The [[Lab Computer]] displays are typically set to 1920×1080 at 120 Hz, which is sufficient for most applications; higher settings are possible on request.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
We recommend using DaVinci Resolve for editing and converting video files. DaVinci Resolve is a free, professional-grade editing program, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Playback==&lt;br /&gt;
&lt;br /&gt;
=== PsychoPy ===&lt;br /&gt;
Example demonstrating how to play a video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
from psychopy import visual &lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
## Setup Section&lt;br /&gt;
win = visual.Window([720,720], fullscr=False, monitor=&amp;quot;testMonitor&amp;quot;, units='cm')&lt;br /&gt;
&lt;br /&gt;
# append this stimulus to the list of prepared stimuli&lt;br /&gt;
vlc_movies = []&lt;br /&gt;
my_movies = ['YourMovie.mp4']#path to your movies from this directory&lt;br /&gt;
&lt;br /&gt;
for movie in my_movies:&lt;br /&gt;
    mov = visual.VlcMovieStim(win, movie,&lt;br /&gt;
    size=600,  # set as `None` to use the native video size&lt;br /&gt;
    pos=[0, 0],  # pos specifies the /center/ of the movie stim location&lt;br /&gt;
    flipVert=False,  # flip the video picture vertically&lt;br /&gt;
    flipHoriz=False,  # flip the video picture horizontally&lt;br /&gt;
    loop=False,  # replay the video when it reaches the end&lt;br /&gt;
    autoStart=True)  # start the video automatically when first drawn&lt;br /&gt;
    vlc_movies.append(mov)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;playing video....&amp;quot;)&lt;br /&gt;
while not keyboard.is_pressed('q') and vlc_movies[0].status != visual.FINISHED:&lt;br /&gt;
    vlc_movies[0].draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
    buffer_in = vlc_movies[0].frameIndex&lt;br /&gt;
    print(vlc_movies[0].status)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Stop&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
## Closing Section&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6016</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6016"/>
		<updated>2025-04-28T14:23:27Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Python */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart=False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with its audio disconnected (the audio track is played separately):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0], &lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
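The 40 ms head start given to the audio (the time.sleep(0.04) above) compensates for the video stream's slower start-up; the right value is system-dependent and should be measured. Since the script relies on time.sleep for this offset, it helps to know how precisely your system honours short sleeps. A minimal standard-library sketch (the 0.04 s target mirrors the script above):

```python
import time

def sleep_overshoot(target, n=25):
    # Average amount by which time.sleep(target) overshoots, in seconds
    total = 0.0
    for _ in range(n):
        t0 = time.perf_counter()
        time.sleep(target)
        total += time.perf_counter() - t0 - target
    return total / n

err = sleep_overshoot(0.04)
print(f'average overshoot for a 40 ms sleep: {err * 1000:.2f} ms')
```

On Windows, raising the timer resolution with winmm.timeBeginPeriod(1), as done in the recording example below, reduces this overshoot.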
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and avoid automatic exposure, white balance, or focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|Enforce a constant frame rate (avoid variable frame rate recordings)&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
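To check whether an existing file matches these settings, you can query it with ffprobe (installed alongside FFmpeg). A minimal sketch; 'stimulus.mp4' is a placeholder file name:

```python
import json
import shutil
import subprocess

def probe_cmd(path):
    # ffprobe command reporting codec, resolution and frame rate of the first video stream
    return ['ffprobe', '-v', 'error', '-select_streams', 'v:0',
            '-show_entries', 'stream=codec_name,width,height,avg_frame_rate',
            '-of', 'json', path]

cmd = probe_cmd('stimulus.mp4')  # placeholder file name
if shutil.which('ffprobe'):  # only run when ffprobe is actually installed
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    if out:
        print(json.loads(out).get('streams', []))
```

The avg_frame_rate field also reveals variable-frame-rate files, which should be re-encoded with a constant frame rate as recommended above.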
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
# set the Windows timer resolution to 1 ms for more accurate sleeps&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height)&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release resources so the video file is finalized&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|PCM (WAV), uncompressed (or another lossless codec)&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Convert your audio for low-latency, high-accuracy playback with FFmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
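You can verify that a WAV file actually has these parameters with Python's built-in wave module. A small self-contained sketch (it writes and then checks a tiny silent test file; with your own stimuli, pass their file name to wav_params instead):

```python
import wave

def wav_params(path):
    # Return (sample rate in Hz, bits per sample, number of channels)
    with wave.open(path, 'rb') as w:
        return w.getframerate(), w.getsampwidth() * 8, w.getnchannels()

# Write a tiny 48 kHz, 16-bit, stereo file, then read its header back
with wave.open('check_48k.wav', 'wb') as w:
    w.setnchannels(2)
    w.setsampwidth(2)        # 2 bytes per sample = 16 bit
    w.setframerate(48000)
    w.writeframes(bytes(8))  # two stereo frames of silence

print(wav_params('check_48k.wav'))  # (48000, 16, 2)
```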
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your playback device → Properties → Advanced tab:&lt;br /&gt;
&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit, Studio Quality.&lt;br /&gt;
&lt;br /&gt;
   - In the Enhancements tab, disable all enhancements.&lt;br /&gt;
&lt;br /&gt;
   - Exclusive Mode (in the same Advanced tab):&lt;br /&gt;
&lt;br /&gt;
       - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
       - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd  # needed below to query the OS-level output device&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up (this script opens no window, so there is no win.close() here)&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
wave_file.stop()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to run ffmpeg from Python:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# Apply the synchronization options above (ffmpeg must be on PATH)&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', 'input.mp4', '-fflags', '+genpts', '-async', '1',&lt;br /&gt;
                '-map', '0:v:0', '-map', '0:a:0', 'output_synced.mp4'], check=True)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that re-encodes video and audio while maintaining high timing accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 \&lt;br /&gt;
       -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 \&lt;br /&gt;
       -c:a pcm_s16le -ar 44100 \&lt;br /&gt;
       -fflags +genpts -async 1 \&lt;br /&gt;
       output.mp4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* -c:v libx264: Encodes video using H.264.&lt;br /&gt;
* -preset slow: Optimizes for quality and compression efficiency.&lt;br /&gt;
* -crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
* -vsync cfr: Enforces a constant frame rate.&lt;br /&gt;
* -c:a pcm_s16le: Encodes audio as uncompressed 16-bit PCM.&lt;br /&gt;
* -ar 44100: Sets the audio sample rate to 44.1 kHz.&lt;br /&gt;
* -fflags +genpts: Generates accurate presentation timestamps.&lt;br /&gt;
* -async 1: Synchronizes the audio and video streams.&lt;br /&gt;
&lt;br /&gt;
===Tips===&lt;br /&gt;
* Ensure low latency: if you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
* Avoid resampling: if possible, keep the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
* Testing: always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can use Shotcut, a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The [[Lab Computer]] displays are typically set to 1920×1080 at 120 Hz, which is sufficient for most applications; higher settings are possible on request.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
We recommend using DaVinci Resolve for editing and converting video files. DaVinci Resolve is a free, professional-grade editing program, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Playback==&lt;br /&gt;
&lt;br /&gt;
=== PsychoPy ===&lt;br /&gt;
Example demonstrating how to play a video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
from psychopy import visual &lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
## Setup Section&lt;br /&gt;
win = visual.Window([720,720], fullscr=False, monitor=&amp;quot;testMonitor&amp;quot;, units='cm')&lt;br /&gt;
&lt;br /&gt;
# append this stimulus to the list of prepared stimuli&lt;br /&gt;
vlc_movies = []&lt;br /&gt;
my_movies = ['YourMovie.mp4']#path to your movies from this directory&lt;br /&gt;
&lt;br /&gt;
for movie in my_movies:&lt;br /&gt;
    mov = visual.VlcMovieStim(win, movie,&lt;br /&gt;
    size=600,  # set as `None` to use the native video size&lt;br /&gt;
    pos=[0, 0],  # pos specifies the /center/ of the movie stim location&lt;br /&gt;
    flipVert=False,  # flip the video picture vertically&lt;br /&gt;
    flipHoriz=False,  # flip the video picture horizontally&lt;br /&gt;
    loop=False,  # replay the video when it reaches the end&lt;br /&gt;
    autoStart=True)  # start the video automatically when first drawn&lt;br /&gt;
    vlc_movies.append(mov)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;playing video....&amp;quot;)&lt;br /&gt;
while not keyboard.is_pressed('q') and vlc_movies[0].status != visual.FINISHED:&lt;br /&gt;
    vlc_movies[0].draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
    buffer_in = vlc_movies[0].frameIndex&lt;br /&gt;
    print(vlc_movies[0].status)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Stop&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
## Closing Section&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6015</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6015"/>
		<updated>2025-04-28T14:22:48Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Video playback */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart=False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with its audio disconnected (the audio track is played separately):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0],  # Center of the window&lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
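The 40 ms head start given to the audio (the time.sleep(0.04) above) compensates for the video stream's slower start-up; the right value is system-dependent and should be measured. Since the script relies on time.sleep for this offset, it helps to know how precisely your system honours short sleeps. A minimal standard-library sketch (the 0.04 s target mirrors the script above):

```python
import time

def sleep_overshoot(target, n=25):
    # Average amount by which time.sleep(target) overshoots, in seconds
    total = 0.0
    for _ in range(n):
        t0 = time.perf_counter()
        time.sleep(target)
        total += time.perf_counter() - t0 - target
    return total / n

err = sleep_overshoot(0.04)
print(f'average overshoot for a 40 ms sleep: {err * 1000:.2f} ms')
```

On Windows, raising the timer resolution with winmm.timeBeginPeriod(1), as done in the recording example below, reduces this overshoot.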
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and avoid automatic exposure, white balance, or focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|Enforce a constant frame rate (avoid variable frame rate recordings)&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
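To check whether an existing file matches these settings, you can query it with ffprobe (installed alongside FFmpeg). A minimal sketch; 'stimulus.mp4' is a placeholder file name:

```python
import json
import shutil
import subprocess

def probe_cmd(path):
    # ffprobe command reporting codec, resolution and frame rate of the first video stream
    return ['ffprobe', '-v', 'error', '-select_streams', 'v:0',
            '-show_entries', 'stream=codec_name,width,height,avg_frame_rate',
            '-of', 'json', path]

cmd = probe_cmd('stimulus.mp4')  # placeholder file name
if shutil.which('ffprobe'):  # only run when ffprobe is actually installed
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    if out:
        print(json.loads(out).get('streams', []))
```

The avg_frame_rate field also reveals variable-frame-rate files, which should be re-encoded with a constant frame rate as recommended above.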
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
# set the Windows timer resolution to 1 ms for more accurate sleeps&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height)&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release resources so the video file is finalized&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|PCM (WAV), uncompressed; or another lossless or high-quality codec&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Convert your audio to these settings for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
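To confirm that a converted file actually has these properties, you can inspect the WAV header from Python with the standard-library wave module. This is a minimal sketch; check_wav is a hypothetical helper name, and the expected values mirror the table above:&lt;br /&gt;

```python
import wave

def check_wav(path, expected_rate=48000, expected_channels=2, expected_sampwidth=2):
    """Return a list of mismatches between the file and the recommended settings."""
    problems = []
    with wave.open(path, 'rb') as w:
        if w.getframerate() != expected_rate:
            problems.append(f"sample rate is {w.getframerate()} Hz, expected {expected_rate} Hz")
        if w.getnchannels() != expected_channels:
            problems.append(f"{w.getnchannels()} channel(s), expected {expected_channels}")
        if w.getsampwidth() != expected_sampwidth:
            problems.append(f"{w.getsampwidth() * 8}-bit samples, expected {expected_sampwidth * 8}-bit")
    return problems
```

An empty result means the file already matches the recommended settings; otherwise, re-run the ffmpeg command above.&lt;br /&gt;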
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your device → Properties:&lt;br /&gt;
&lt;br /&gt;
   Advanced tab:&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit (Studio Quality).&lt;br /&gt;
   - Under Exclusive Mode, check both:&lt;br /&gt;
     - Allow applications to take exclusive control of this device&lt;br /&gt;
     - Give exclusive mode applications priority&lt;br /&gt;
&lt;br /&gt;
   Enhancements tab:&lt;br /&gt;
   - Disable all enhancements.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
wave_file.stop()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to call ffmpeg from Python (ffmpeg must be on your PATH):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# Convert audio to the recommended settings (see the Audio encoding section)&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', 'input.wav', '-ar', '48000', '-ac', '2',&lt;br /&gt;
                '-sample_fmt', 's16', 'output_fixed.wav'], check=True)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high timing accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 \&lt;br /&gt;
       -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 \&lt;br /&gt;
       -c:a pcm_s16le -ar 48000 \&lt;br /&gt;
       -fflags +genpts -async 1 \&lt;br /&gt;
       output.mp4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
•	-c:v libx264: Encodes video using H.264.&lt;br /&gt;
•	-preset slow: Optimizes for quality and compression efficiency.&lt;br /&gt;
•	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
•	-vsync cfr: Enforces a constant frame rate.&lt;br /&gt;
•	-c:a pcm_s16le: Encodes audio as uncompressed 16-bit PCM (use a .mov or .mkv container if the .mp4 muxer rejects PCM).&lt;br /&gt;
•	-ar 48000: Sets the audio sample rate to 48 kHz, matching the recommended audio settings.&lt;br /&gt;
•	-fflags +genpts: Ensures accurate timestamps.&lt;br /&gt;
•	-async 1: Synchronizes audio and video streams.&lt;br /&gt;
&lt;br /&gt;
===Tips===&lt;br /&gt;
•	Ensure Low Latency: If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
•	Avoid Resampling: If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
•	Testing: Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can use Shotcut, a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The [[Lab Computer]] displays are typically set to 1920×1080 at 120 Hz. We have found this sufficient for most applications; higher refresh rates are possible if needed.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
We recommend using DaVinci Resolve for editing and converting video files. DaVinci Resolve is a free, professional-grade editing program, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Playback==&lt;br /&gt;
&lt;br /&gt;
=== PsychoPy ===&lt;br /&gt;
Example demonstrating how to play a video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
from psychopy import visual&lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
## Setup Section&lt;br /&gt;
win = visual.Window([720,720], fullscr=False, monitor=&amp;quot;testMonitor&amp;quot;, units='pix')&lt;br /&gt;
&lt;br /&gt;
# append this stimulus to the list of prepared stimuli&lt;br /&gt;
vlc_movies = []&lt;br /&gt;
my_movies = ['YourMovie.mp4']  # paths to your movies, relative to this directory&lt;br /&gt;
&lt;br /&gt;
for movie in my_movies:&lt;br /&gt;
    mov = visual.VlcMovieStim(win, movie,&lt;br /&gt;
    size=600,  # set as `None` to use the native video size&lt;br /&gt;
    pos=[0, 0],  # pos specifies the /center/ of the movie stim location&lt;br /&gt;
    flipVert=False,  # flip the video picture vertically&lt;br /&gt;
    flipHoriz=False,  # flip the video picture horizontally&lt;br /&gt;
    loop=False,  # replay the video when it reaches the end&lt;br /&gt;
    autoStart=True)  # start the video automatically when first drawn&lt;br /&gt;
    vlc_movies.append(mov)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;playing video....&amp;quot;)&lt;br /&gt;
while not(keyboard.is_pressed('q')) and vlc_movies[0].status != visual.FINISHED:&lt;br /&gt;
    vlc_movies[0].draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
    frame_index = vlc_movies[0].frameIndex  # current frame index, useful for logging&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Stop&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
## Closing Section&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6014</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6014"/>
		<updated>2025-04-28T14:22:09Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video playback==&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to play a video with audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
logging.console.setLevel(logging.WARNING)&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
from psychopy.hardware import keyboard&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_combined_30min.mp4&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1024, 768), fullscr=False, color=(0, 0, 0))&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    autoStart= False&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kb = keyboard.Keyboard()&lt;br /&gt;
&lt;br /&gt;
# Play the video&lt;br /&gt;
win.flip()&lt;br /&gt;
core.wait(3.0)&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
# Main loop for video playback&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    keys = kb.getKeys(['q'], waitRelease=True)&lt;br /&gt;
    if 'q' in keys:&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example demonstrating how to play a video with its own audio track disabled, while playing the audio from a separate file:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
from psychopy import logging, prefs&lt;br /&gt;
logging.console.setLevel(logging.DEBUG)&lt;br /&gt;
&lt;br /&gt;
# Audio prefs must be set before psychopy.sound is imported&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioLatencyMode'] = 2&lt;br /&gt;
&lt;br /&gt;
from psychopy import visual, core, sound, event&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
# File paths for video and audio&lt;br /&gt;
video_file = &amp;quot;tick_rhythm_30min.mp4&amp;quot;&lt;br /&gt;
audio_file = &amp;quot;tick_rhythm_30min.wav&amp;quot;&lt;br /&gt;
&lt;br /&gt;
win = visual.Window(size=(1280, 720), fullscr=False, color=(0, 0, 0), units=&amp;quot;pix&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
video = visual.VlcMovieStim(&lt;br /&gt;
    win, filename=video_file,&lt;br /&gt;
    size=None,  # Use the native video size&lt;br /&gt;
    pos=[0, 0],  # Center of the window&lt;br /&gt;
    flipVert=False,&lt;br /&gt;
    flipHoriz=False,&lt;br /&gt;
    loop=False,&lt;br /&gt;
    autoStart=False,&lt;br /&gt;
    noAudio=True,&lt;br /&gt;
    volume=100,&lt;br /&gt;
    name='myMovie'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Load the audio&lt;br /&gt;
audio = sound.Sound(audio_file, -1)&lt;br /&gt;
&lt;br /&gt;
# Synchronize audio and video playback&lt;br /&gt;
win.flip()&lt;br /&gt;
time.sleep(5)&lt;br /&gt;
 &lt;br /&gt;
audio.play()&lt;br /&gt;
time.sleep(0.04)  # empirically chosen offset between audio and video start; tune for your setup&lt;br /&gt;
video.play()&lt;br /&gt;
video_start_time = core.getTime()&lt;br /&gt;
&lt;br /&gt;
while video.status != visual.FINISHED:&lt;br /&gt;
    # Draw the current video frame&lt;br /&gt;
    video.draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
    # Check for keypress to quit&lt;br /&gt;
    if &amp;quot;q&amp;quot; in event.getKeys():&lt;br /&gt;
        audio.stop()&lt;br /&gt;
        break&lt;br /&gt;
&lt;br /&gt;
# Close the PsychoPy window&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
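The recommended bitrate translates directly into disk usage, which is worth estimating before long recording sessions. A quick back-of-the-envelope calculation; recording_size_mb is a hypothetical helper name:&lt;br /&gt;

```python
def recording_size_mb(bitrate_mbps, duration_s):
    """Approximate file size in megabytes: megabits/second × seconds ÷ 8 bits per byte."""
    return bitrate_mbps * duration_s / 8

# One minute of Full HD video at the recommended 10-20 Mbps:
print(recording_size_mb(10, 60))  # 75.0 (MB)
print(recording_size_mb(20, 60))  # 150.0 (MB)
```

So a one-hour session at 20 Mbps needs roughly 9 GB of free disk space.&lt;br /&gt;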
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height, fps=freq)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and finalize the video file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|PCM (WAV), uncompressed; or another lossless or high-quality codec&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Convert your audio to these settings for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your device → Properties:&lt;br /&gt;
&lt;br /&gt;
   Advanced tab:&lt;br /&gt;
   - Set Default Format to 48000 Hz, 16 bit (Studio Quality).&lt;br /&gt;
   - Under Exclusive Mode, check both:&lt;br /&gt;
     - Allow applications to take exclusive control of this device&lt;br /&gt;
     - Give exclusive mode applications priority&lt;br /&gt;
&lt;br /&gt;
   Enhancements tab:&lt;br /&gt;
   - Disable all enhancements.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
wave_file.stop()&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to call ffmpeg from Python (ffmpeg must be on your PATH):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# Convert audio to the recommended settings (see the Audio encoding section)&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', 'input.wav', '-ar', '48000', '-ac', '2',&lt;br /&gt;
                '-sample_fmt', 's16', 'output_fixed.wav'], check=True)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high timing accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 \&lt;br /&gt;
       -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 \&lt;br /&gt;
       -c:a pcm_s16le -ar 48000 \&lt;br /&gt;
       -fflags +genpts -async 1 \&lt;br /&gt;
       output.mp4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
•	-c:v libx264: Encodes video using H.264.&lt;br /&gt;
•	-preset slow: Optimizes for quality and compression efficiency.&lt;br /&gt;
•	-crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
•	-vsync cfr: Enforces a constant frame rate.&lt;br /&gt;
•	-c:a pcm_s16le: Encodes audio as uncompressed 16-bit PCM (use a .mov or .mkv container if the .mp4 muxer rejects PCM).&lt;br /&gt;
•	-ar 48000: Sets the audio sample rate to 48 kHz, matching the recommended audio settings.&lt;br /&gt;
•	-fflags +genpts: Ensures accurate timestamps.&lt;br /&gt;
•	-async 1: Synchronizes audio and video streams.&lt;br /&gt;
&lt;br /&gt;
===Tips===&lt;br /&gt;
•	Ensure Low Latency: If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
•	Avoid Resampling: If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
•	Testing: Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can use Shotcut, a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The [[Lab Computer]] displays are typically set to 1920×1080 at 120 Hz. We have found this sufficient for most applications; higher refresh rates are possible if needed.&lt;br /&gt;
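Smooth playback also depends on the video frame rate dividing evenly into the display refresh rate. A quick arithmetic check, using the 120 Hz and 60 fps figures from the recommendations above; refreshes_per_frame is a hypothetical helper name:&lt;br /&gt;

```python
def refreshes_per_frame(display_hz, video_fps):
    """Number of display refreshes each video frame spans; ideally a whole number."""
    return display_hz / video_fps

# 60 fps video on a 120 Hz display: every frame spans exactly 2 refreshes.
print(refreshes_per_frame(120, 60))  # 2.0
# 50 fps video on the same display spans 2.4 refreshes per frame, so frames
# alternate between 2 and 3 refreshes on screen, which is visible as judder.
print(refreshes_per_frame(120, 50))  # 2.4
```

When the result is not a whole number, either re-encode the video at a matching frame rate or change the display refresh rate.&lt;br /&gt;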
&lt;br /&gt;
==Editing==&lt;br /&gt;
We recommend using DaVinci Resolve for editing and converting video files. DaVinci Resolve is a free, professional-grade editing program, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Playback==&lt;br /&gt;
&lt;br /&gt;
=== PsychoPy ===&lt;br /&gt;
Example demonstrating how to play a video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
from psychopy import visual&lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
## Setup Section&lt;br /&gt;
win = visual.Window([720,720], fullscr=False, monitor=&amp;quot;testMonitor&amp;quot;, units='pix')&lt;br /&gt;
&lt;br /&gt;
# append this stimulus to the list of prepared stimuli&lt;br /&gt;
vlc_movies = []&lt;br /&gt;
my_movies = ['YourMovie.mp4']  # paths to your movies, relative to this directory&lt;br /&gt;
&lt;br /&gt;
for movie in my_movies:&lt;br /&gt;
    mov = visual.VlcMovieStim(win, movie,&lt;br /&gt;
    size=600,  # set as `None` to use the native video size&lt;br /&gt;
    pos=[0, 0],  # pos specifies the /center/ of the movie stim location&lt;br /&gt;
    flipVert=False,  # flip the video picture vertically&lt;br /&gt;
    flipHoriz=False,  # flip the video picture horizontally&lt;br /&gt;
    loop=False,  # replay the video when it reaches the end&lt;br /&gt;
    autoStart=True)  # start the video automatically when first drawn&lt;br /&gt;
    vlc_movies.append(mov)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;playing video....&amp;quot;)&lt;br /&gt;
while not(keyboard.is_pressed('q')) and vlc_movies[0].status != visual.FINISHED:&lt;br /&gt;
    vlc_movies[0].draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
    frame_index = vlc_movies[0].frameIndex  # current frame index, useful for logging&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Stop&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
## Closing Section&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6013</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6013"/>
		<updated>2025-04-28T13:45:19Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Windows Settings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Right-click the desktop → Display settings → Graphics settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height, fps=freq)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    camera.release()  # release the capture device&lt;br /&gt;
    video_writer.release()  # flush and close the output file&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|PCM (WAV), uncompressed; or another lossless/high-quality codec&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Set your audio for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
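To confirm that a converted file actually matches the recommended format, you can check it with Python's standard wave module (a minimal sketch; the file name and helper are our own, not part of FFmpeg):&lt;br /&gt;

```python
import wave

def check_wav(path):
    # Verify a WAV file matches the recommended
    # 48 kHz / 16-bit / stereo format.
    with wave.open(path, 'rb') as w:
        return (w.getframerate() == 48000 and
                w.getsampwidth() == 2 and
                w.getnchannels() == 2)
```

For example, `check_wav('output_fixed.wav')` should return True for the file produced by the command above.&lt;br /&gt;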
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your device → Properties:&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab → set Default Format to 48000 Hz, 16 bit (Studio Quality).&lt;br /&gt;
&lt;br /&gt;
   - Enhancements tab → disable all enhancements.&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab → Exclusive Mode:&lt;br /&gt;
&lt;br /&gt;
       - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
       - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Minimal example demonstrating how to call ffmpeg from Python:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# Re-encode input.mp4 with regenerated, synchronized timestamps&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', 'input.mp4',&lt;br /&gt;
                '-fflags', '+genpts', '-async', '1',&lt;br /&gt;
                'output.mp4'], check=True)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 \&lt;br /&gt;
       -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 \&lt;br /&gt;
       -c:a pcm_s16le -ar 44100 \&lt;br /&gt;
       -fflags +genpts -async 1 \&lt;br /&gt;
       output.mp4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* -c:v libx264: Encodes video with H.264.&lt;br /&gt;
* -preset slow: Optimizes for quality and compression efficiency.&lt;br /&gt;
* -crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
* -vsync cfr: Enforces a constant frame rate.&lt;br /&gt;
* -c:a pcm_s16le: Encodes audio as uncompressed 16-bit PCM.&lt;br /&gt;
* -ar 44100: Sets the audio sample rate to 44.1 kHz.&lt;br /&gt;
* -fflags +genpts: Generates accurate presentation timestamps.&lt;br /&gt;
* -async 1: Synchronizes the audio and video streams.&lt;br /&gt;
&lt;br /&gt;
===Tips===&lt;br /&gt;
* Ensure low latency: if you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
* Avoid resampling: where possible, keep the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
* Testing: always test playback on different devices and players to confirm synchronization.&lt;br /&gt;
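The constant-frame-rate requirement above can be verified by dumping per-frame timestamps with ffprobe and checking that the intervals are uniform. A minimal sketch (the helper functions are our own; it assumes ffprobe is on the PATH):&lt;br /&gt;

```python
import subprocess

def frame_intervals(pts_csv):
    # Parse ffprobe CSV output (lines like "frame,0.016667")
    # into a list of inter-frame intervals in seconds.
    times = [float(line.split(',')[1])
             for line in pts_csv.splitlines() if line.strip()]
    return [round(b - a, 6) for a, b in zip(times, times[1:])]

def is_cfr(pts_csv, tolerance=1e-4):
    # A stream is (approximately) constant frame rate when every
    # interval matches the first one within the tolerance.
    intervals = frame_intervals(pts_csv)
    return all(abs(i - intervals[0]) < tolerance for i in intervals)

def probe_pts(path):
    # Dump per-frame presentation timestamps of the first video stream.
    result = subprocess.run(
        ['ffprobe', '-v', 'error', '-select_streams', 'v:0',
         '-show_entries', 'frame=pts_time', '-of', 'csv', path],
        capture_output=True, text=True, check=True)
    return result.stdout
```

Usage: `is_cfr(probe_pts('output.mp4'))` returns True when the encoded file plays at a constant frame rate.&lt;br /&gt;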
&lt;br /&gt;
Alternatively, you can use Shotcut, a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The [[Lab Computer]] displays are typically set to 1920×1080 at 120 Hz. We have found this sufficient for most applications; going higher is possible.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
We recommend using DaVinci Resolve for editing and converting video files. DaVinci Resolve is a free, professional-grade editing program, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Playback==&lt;br /&gt;
&lt;br /&gt;
=== PsychoPy ===&lt;br /&gt;
Example demonstrating how to play a video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
from psychopy import visual&lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
## Setup Section&lt;br /&gt;
win = visual.Window([720,720], fullscr=False, monitor=&amp;quot;testMonitor&amp;quot;, units='cm')&lt;br /&gt;
&lt;br /&gt;
# append this stimulus to the list of prepared stimuli&lt;br /&gt;
vlc_movies = []&lt;br /&gt;
my_movies = ['YourMovie.mp4']  # paths to your movies, relative to this directory&lt;br /&gt;
&lt;br /&gt;
for movie in my_movies:&lt;br /&gt;
    mov = visual.VlcMovieStim(win, movie,&lt;br /&gt;
    size=600,  # set as `None` to use the native video size&lt;br /&gt;
    pos=[0, 0],  # pos specifies the /center/ of the movie stim location&lt;br /&gt;
    flipVert=False,  # flip the video picture vertically&lt;br /&gt;
    flipHoriz=False,  # flip the video picture horizontally&lt;br /&gt;
    loop=False,  # replay the video when it reaches the end&lt;br /&gt;
    autoStart=True)  # start the video automatically when first drawn&lt;br /&gt;
    vlc_movies.append(mov)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;playing video....&amp;quot;)&lt;br /&gt;
while not keyboard.is_pressed('q') and vlc_movies[0].status != visual.FINISHED:&lt;br /&gt;
    vlc_movies[0].draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
    frame_idx = vlc_movies[0].frameIndex  # index of the most recently drawn frame&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Stop&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
## Closing Section&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6012</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6012"/>
		<updated>2025-04-28T13:43:05Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
Open Settings → System → Display → Graphics Settings.&lt;br /&gt;
If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for critical timing experiments.&lt;br /&gt;
&lt;br /&gt;
For specific applications (e.g., PsychoPy), under &amp;quot;Graphics Performance Preference,&amp;quot; set them to &amp;quot;High Performance&amp;quot; to ensure they use the dedicated GPU.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
# request a 1 ms Windows timer resolution so short sleeps are accurate&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height, fps=freq)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    camera.release()  # release the capture device&lt;br /&gt;
    video_writer.release()  # flush and close the output file&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|PCM (WAV), uncompressed; or another lossless/high-quality codec&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Set your audio for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
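To confirm that a converted file actually matches the recommended format, you can check it with Python's standard wave module (a minimal sketch; the file name and helper are our own, not part of FFmpeg):&lt;br /&gt;

```python
import wave

def check_wav(path):
    # Verify a WAV file matches the recommended
    # 48 kHz / 16-bit / stereo format.
    with wave.open(path, 'rb') as w:
        return (w.getframerate() == 48000 and
                w.getsampwidth() == 2 and
                w.getnchannels() == 2)
```

For example, `check_wav('output_fixed.wav')` should return True for the file produced by the command above.&lt;br /&gt;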
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your device → Properties:&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab → set Default Format to 48000 Hz, 16 bit (Studio Quality).&lt;br /&gt;
&lt;br /&gt;
   - Enhancements tab → disable all enhancements.&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab → Exclusive Mode:&lt;br /&gt;
&lt;br /&gt;
       - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
       - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Minimal example demonstrating how to call ffmpeg from Python:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# Re-encode input.mp4 with regenerated, synchronized timestamps&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', 'input.mp4',&lt;br /&gt;
                '-fflags', '+genpts', '-async', '1',&lt;br /&gt;
                'output.mp4'], check=True)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 \&lt;br /&gt;
       -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 \&lt;br /&gt;
       -c:a pcm_s16le -ar 44100 \&lt;br /&gt;
       -fflags +genpts -async 1 \&lt;br /&gt;
       output.mp4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* -c:v libx264: Encodes video with H.264.&lt;br /&gt;
* -preset slow: Optimizes for quality and compression efficiency.&lt;br /&gt;
* -crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
* -vsync cfr: Enforces a constant frame rate.&lt;br /&gt;
* -c:a pcm_s16le: Encodes audio as uncompressed 16-bit PCM.&lt;br /&gt;
* -ar 44100: Sets the audio sample rate to 44.1 kHz.&lt;br /&gt;
* -fflags +genpts: Generates accurate presentation timestamps.&lt;br /&gt;
* -async 1: Synchronizes the audio and video streams.&lt;br /&gt;
&lt;br /&gt;
===Tips===&lt;br /&gt;
* Ensure low latency: if you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
* Avoid resampling: where possible, keep the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
* Testing: always test playback on different devices and players to confirm synchronization.&lt;br /&gt;
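The constant-frame-rate requirement above can be verified by dumping per-frame timestamps with ffprobe and checking that the intervals are uniform. A minimal sketch (the helper functions are our own; it assumes ffprobe is on the PATH):&lt;br /&gt;

```python
import subprocess

def frame_intervals(pts_csv):
    # Parse ffprobe CSV output (lines like "frame,0.016667")
    # into a list of inter-frame intervals in seconds.
    times = [float(line.split(',')[1])
             for line in pts_csv.splitlines() if line.strip()]
    return [round(b - a, 6) for a, b in zip(times, times[1:])]

def is_cfr(pts_csv, tolerance=1e-4):
    # A stream is (approximately) constant frame rate when every
    # interval matches the first one within the tolerance.
    intervals = frame_intervals(pts_csv)
    return all(abs(i - intervals[0]) < tolerance for i in intervals)

def probe_pts(path):
    # Dump per-frame presentation timestamps of the first video stream.
    result = subprocess.run(
        ['ffprobe', '-v', 'error', '-select_streams', 'v:0',
         '-show_entries', 'frame=pts_time', '-of', 'csv', path],
        capture_output=True, text=True, check=True)
    return result.stdout
```

Usage: `is_cfr(probe_pts('output.mp4'))` returns True when the encoded file plays at a constant frame rate.&lt;br /&gt;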
&lt;br /&gt;
Alternatively, you can use Shotcut, a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The [[Lab Computer]] displays are typically set to 1920×1080 at 120 Hz. We have found this sufficient for most applications; going higher is possible.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
We recommend using DaVinci Resolve for editing and converting video files. DaVinci Resolve is a free, professional-grade editing program, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Playback==&lt;br /&gt;
&lt;br /&gt;
=== PsychoPy ===&lt;br /&gt;
Example demonstrating how to play a video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
from psychopy import visual&lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
## Setup Section&lt;br /&gt;
win = visual.Window([720,720], fullscr=False, monitor=&amp;quot;testMonitor&amp;quot;, units='cm')&lt;br /&gt;
&lt;br /&gt;
# append this stimulus to the list of prepared stimuli&lt;br /&gt;
vlc_movies = []&lt;br /&gt;
my_movies = ['YourMovie.mp4']  # paths to your movies, relative to this directory&lt;br /&gt;
&lt;br /&gt;
for movie in my_movies:&lt;br /&gt;
    mov = visual.VlcMovieStim(win, movie,&lt;br /&gt;
    size=600,  # set as `None` to use the native video size&lt;br /&gt;
    pos=[0, 0],  # pos specifies the /center/ of the movie stim location&lt;br /&gt;
    flipVert=False,  # flip the video picture vertically&lt;br /&gt;
    flipHoriz=False,  # flip the video picture horizontally&lt;br /&gt;
    loop=False,  # replay the video when it reaches the end&lt;br /&gt;
    autoStart=True)  # start the video automatically when first drawn&lt;br /&gt;
    vlc_movies.append(mov)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;playing video....&amp;quot;)&lt;br /&gt;
while not keyboard.is_pressed('q') and vlc_movies[0].status != visual.FINISHED:&lt;br /&gt;
    vlc_movies[0].draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
    frame_idx = vlc_movies[0].frameIndex  # index of the most recently drawn frame&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Stop&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
## Closing Section&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6011</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6011"/>
		<updated>2025-04-28T13:40:09Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Windows Settings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to always consult a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and disable automatic exposure, white balance, and focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
# request a 1 ms Windows timer resolution so short sleeps are accurate&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height, fps=freq)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    camera.release()  # release the capture device&lt;br /&gt;
    video_writer.release()  # flush and close the output file&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|PCM (WAV), uncompressed; or another lossless/high-quality codec&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Set your audio for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
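To confirm that a converted file actually matches the recommended format, you can check it with Python's standard wave module (a minimal sketch; the file name and helper are our own, not part of FFmpeg):&lt;br /&gt;

```python
import wave

def check_wav(path):
    # Verify a WAV file matches the recommended
    # 48 kHz / 16-bit / stereo format.
    with wave.open(path, 'rb') as w:
        return (w.getframerate() == 48000 and
                w.getsampwidth() == 2 and
                w.getnchannels() == 2)
```

For example, `check_wav('output_fixed.wav')` should return True for the file produced by the command above.&lt;br /&gt;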
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Sound → Playback → right-click your device → Properties:&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab → set Default Format to 48000 Hz, 16 bit (Studio Quality).&lt;br /&gt;
&lt;br /&gt;
   - Enhancements tab → disable all enhancements.&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab → Exclusive Mode:&lt;br /&gt;
&lt;br /&gt;
       - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
       - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to use ffmpeg:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# Re-encode input.mp4 with the synchronization options described above&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', 'input.mp4',&lt;br /&gt;
                '-map', '0:v:0', '-map', '0:a:0',&lt;br /&gt;
                '-fflags', '+genpts', '-async', '1',&lt;br /&gt;
                'output.mp4'], check=True)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 \&lt;br /&gt;
       -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 \&lt;br /&gt;
       -c:a pcm_s16le -ar 48000 \&lt;br /&gt;
       -fflags +genpts -async 1 \&lt;br /&gt;
       output.mp4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* -c:v libx264: Encode video using H.264.&lt;br /&gt;
* -preset slow: Optimize for quality and compression efficiency.&lt;br /&gt;
* -crf 18: Adjust quality (lower = better; range: 0–51).&lt;br /&gt;
* -vsync cfr: Enforce a constant frame rate.&lt;br /&gt;
* -c:a pcm_s16le: Encode audio as uncompressed PCM.&lt;br /&gt;
* -ar 48000: Set the audio sample rate to 48 kHz, matching the recommended audio settings above.&lt;br /&gt;
* -fflags +genpts: Generate accurate presentation timestamps.&lt;br /&gt;
* -async 1: Synchronize the audio and video streams.&lt;br /&gt;
&lt;br /&gt;
===Tips===&lt;br /&gt;
* Ensure Low Latency: If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
* Avoid Resampling: If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
* Testing: Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can use Shotcut, a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The [[Lab Computer]] displays are typically set to 1920×1080 at 120 Hz, which is sufficient for most applications; higher resolutions and refresh rates are available on some setups.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
We recommend using DaVinci Resolve for editing and converting video files. DaVinci Resolve is a free, professional-grade editing program, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Windows Settings==&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
* Open Settings → System → Display → Graphics Settings.&lt;br /&gt;
* If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for timing-critical experiments.&lt;br /&gt;
* For specific applications (e.g., PsychoPy), set &amp;quot;Graphics Performance Preference&amp;quot; to &amp;quot;High Performance&amp;quot; so they use the dedicated GPU.&lt;br /&gt;
==Playback==&lt;br /&gt;
&lt;br /&gt;
=== PsychoPy ===&lt;br /&gt;
Example demonstrating how to play a video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
from psychopy import visual &lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
## Setup Section&lt;br /&gt;
win = visual.Window([720,720], fullscr=False, monitor=&amp;quot;testMonitor&amp;quot;, units='cm')&lt;br /&gt;
&lt;br /&gt;
# append this stimulus to the list of prepared stimuli&lt;br /&gt;
vlc_movies = []&lt;br /&gt;
my_movies = ['YourMovie.mp4']  # paths to your movies, relative to this directory&lt;br /&gt;
&lt;br /&gt;
for movie in my_movies:&lt;br /&gt;
    mov = visual.VlcMovieStim(win, movie,&lt;br /&gt;
    size=600,  # set as `None` to use the native video size&lt;br /&gt;
    pos=[0, 0],  # pos specifies the /center/ of the movie stim location&lt;br /&gt;
    flipVert=False,  # flip the video picture vertically&lt;br /&gt;
    flipHoriz=False,  # flip the video picture horizontally&lt;br /&gt;
    loop=False,  # replay the video when it reaches the end&lt;br /&gt;
    autoStart=True)  # start the video automatically when first drawn&lt;br /&gt;
    vlc_movies.append(mov)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;playing video....&amp;quot;)&lt;br /&gt;
while not keyboard.is_pressed('q') and vlc_movies[0].status != visual.FINISHED:&lt;br /&gt;
    vlc_movies[0].draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Stop&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
## Closing Section&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6010</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6010"/>
		<updated>2025-04-28T13:38:51Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Windows Settings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to consult a TSG member before running a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and avoid automatic exposure, white balance, or focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
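To check whether an encoded file actually matches these settings, you can query it with ffprobe, which ships alongside ffmpeg. A minimal sketch; 'YourMovie.mp4' is a placeholder file name:&lt;br /&gt;

```python
import json
import os
import shutil
import subprocess

def parse_frame_rate(ratio):
    """Convert ffprobe's 'num/den' rate string (e.g. '60/1') to a float."""
    num, den = ratio.split('/')
    return int(num) / int(den)

def probe_video(path):
    """Return (width, height, fps) of the first video stream using ffprobe."""
    out = subprocess.run(
        ['ffprobe', '-v', 'quiet', '-print_format', 'json',
         '-show_streams', '-select_streams', 'v:0', path],
        capture_output=True, text=True, check=True).stdout
    stream = json.loads(out)['streams'][0]
    return (stream['width'], stream['height'],
            parse_frame_rate(stream['avg_frame_rate']))

# Only probe when ffprobe and the (placeholder) file are actually available
if shutil.which('ffprobe') and os.path.exists('YourMovie.mp4'):
    print(probe_video('YourMovie.mp4'))
else:
    print(parse_frame_rate('60/1'))  # prints 60.0
```

Comparing the reported resolution and frame rate against the table above catches accidental variable-frame-rate or resolution mismatches before they reach the lab.&lt;br /&gt;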
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height, freq)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and close the output file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|lossless or high-quality codecs&lt;br /&gt;
|-&lt;br /&gt;
!Format&lt;br /&gt;
|PCM (WAV), uncompressed&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Set your audio for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
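To confirm that a converted file actually has these parameters, you can inspect it with Python's standard wave module. A minimal sketch; the short test tone and its file name are purely illustrative stand-ins for your own recording:&lt;br /&gt;

```python
import math
import wave

def write_test_tone(path, rate=48000, channels=2, seconds=1):
    """Write a short 440 Hz tone as 16-bit PCM, so the check below has input."""
    with wave.open(path, 'wb') as w:
        w.setnchannels(channels)
        w.setsampwidth(2)          # 2 bytes per sample = 16-bit
        w.setframerate(rate)
        frames = bytearray()
        for i in range(rate * seconds):
            sample = int(20000 * math.sin(2 * math.pi * 440 * i / rate))
            frames += sample.to_bytes(2, 'little', signed=True) * channels
        w.writeframes(bytes(frames))

def matches_recommended(path):
    """True if the file is 48 kHz, 16-bit, stereo PCM (the settings above)."""
    with wave.open(path, 'rb') as w:
        return (w.getframerate() == 48000 and
                w.getnchannels() == 2 and
                w.getsampwidth() == 2)

write_test_tone('test_tone.wav')
print(matches_recommended('test_tone.wav'))  # True
```

Running this check on your stimulus files before an experiment catches sample-rate mismatches that would otherwise force resampling at playback time.&lt;br /&gt;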
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   Sound → Playback → right-click the device → Properties:&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab: set Default Format to 48000 Hz, 16 bit, Studio Quality.&lt;br /&gt;
&lt;br /&gt;
   - Enhancements tab: disable all enhancements.&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab, Exclusive Mode:&lt;br /&gt;
&lt;br /&gt;
      - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
      - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd  # used below to query the OS-level output device&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to use ffmpeg:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# Re-encode input.mp4 with the synchronization options described above&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', 'input.mp4',&lt;br /&gt;
                '-map', '0:v:0', '-map', '0:a:0',&lt;br /&gt;
                '-fflags', '+genpts', '-async', '1',&lt;br /&gt;
                'output.mp4'], check=True)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 \&lt;br /&gt;
       -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 \&lt;br /&gt;
       -c:a pcm_s16le -ar 48000 \&lt;br /&gt;
       -fflags +genpts -async 1 \&lt;br /&gt;
       output.mp4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* -c:v libx264: Encode video using H.264.&lt;br /&gt;
* -preset slow: Optimize for quality and compression efficiency.&lt;br /&gt;
* -crf 18: Adjust quality (lower = better; range: 0–51).&lt;br /&gt;
* -vsync cfr: Enforce a constant frame rate.&lt;br /&gt;
* -c:a pcm_s16le: Encode audio as uncompressed PCM.&lt;br /&gt;
* -ar 48000: Set the audio sample rate to 48 kHz, matching the recommended audio settings above.&lt;br /&gt;
* -fflags +genpts: Generate accurate presentation timestamps.&lt;br /&gt;
* -async 1: Synchronize the audio and video streams.&lt;br /&gt;
&lt;br /&gt;
===Tips===&lt;br /&gt;
* Ensure Low Latency: If you're processing video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
* Avoid Resampling: If possible, use the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
* Testing: Always test playback on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can use Shotcut, a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The [[Lab Computer]] displays are typically set to 1920×1080 at 120 Hz, which is sufficient for most applications; higher resolutions and refresh rates are available on some setups.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
We recommend using DaVinci Resolve for editing and converting video files. DaVinci Resolve is a free, professional-grade editing program, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Windows Settings==&lt;br /&gt;
Windows 10 has a habit of automatically enabling '''video enhancements''' or unnecessary processing features, which can interfere with smooth playback. Therefore, please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
* Open Settings → System → Display → Graphics Settings.&lt;br /&gt;
* If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for timing-critical experiments.&lt;br /&gt;
* For specific applications (e.g., PsychoPy), set &amp;quot;Graphics Performance Preference&amp;quot; to &amp;quot;High Performance&amp;quot; so they use the dedicated GPU.&lt;br /&gt;
==Playback==&lt;br /&gt;
&lt;br /&gt;
=== PsychoPy ===&lt;br /&gt;
Example demonstrating how to play a video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
from psychopy import visual &lt;br /&gt;
from psychopy import core&lt;br /&gt;
&lt;br /&gt;
## Setup Section&lt;br /&gt;
win = visual.Window([720,720], fullscr=False, monitor=&amp;quot;testMonitor&amp;quot;, units='cm')&lt;br /&gt;
&lt;br /&gt;
# append this stimulus to the list of prepared stimuli&lt;br /&gt;
vlc_movies = []&lt;br /&gt;
my_movies = ['YourMovie.mp4']  # paths to your movies, relative to this directory&lt;br /&gt;
&lt;br /&gt;
for movie in my_movies:&lt;br /&gt;
    mov = visual.VlcMovieStim(win, movie,&lt;br /&gt;
    size=600,  # set as `None` to use the native video size&lt;br /&gt;
    pos=[0, 0],  # pos specifies the /center/ of the movie stim location&lt;br /&gt;
    flipVert=False,  # flip the video picture vertically&lt;br /&gt;
    flipHoriz=False,  # flip the video picture horizontally&lt;br /&gt;
    loop=False,  # replay the video when it reaches the end&lt;br /&gt;
    autoStart=True)  # start the video automatically when first drawn&lt;br /&gt;
    vlc_movies.append(mov)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;playing video....&amp;quot;)&lt;br /&gt;
while not keyboard.is_pressed('q') and vlc_movies[0].status != visual.FINISHED:&lt;br /&gt;
    vlc_movies[0].draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Stop&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
## Closing Section&lt;br /&gt;
win.close()&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6009</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6009"/>
		<updated>2025-04-28T13:37:17Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Audio encoding */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, special care should be taken to optimize the video and audio settings on multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise you to consult a TSG member before running a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and avoid automatic exposure, white balance, or focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
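To check whether an encoded file actually matches these settings, you can query it with ffprobe, which ships alongside ffmpeg. A minimal sketch; 'YourMovie.mp4' is a placeholder file name:&lt;br /&gt;

```python
import json
import os
import shutil
import subprocess

def parse_frame_rate(ratio):
    """Convert ffprobe's 'num/den' rate string (e.g. '60/1') to a float."""
    num, den = ratio.split('/')
    return int(num) / int(den)

def probe_video(path):
    """Return (width, height, fps) of the first video stream using ffprobe."""
    out = subprocess.run(
        ['ffprobe', '-v', 'quiet', '-print_format', 'json',
         '-show_streams', '-select_streams', 'v:0', path],
        capture_output=True, text=True, check=True).stdout
    stream = json.loads(out)['streams'][0]
    return (stream['width'], stream['height'],
            parse_frame_rate(stream['avg_frame_rate']))

# Only probe when ffprobe and the (placeholder) file are actually available
if shutil.which('ffprobe') and os.path.exists('YourMovie.mp4'):
    print(probe_video('YourMovie.mp4'))
else:
    print(parse_frame_rate('60/1'))  # prints 60.0
```

Comparing the reported resolution and frame rate against the table above catches accidental variable-frame-rate or resolution mismatches before they reach the lab.&lt;br /&gt;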
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
#set sleep to 1ms accuracy&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height, freq)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp +'_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and close the output file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|lossless or high-quality codecs&lt;br /&gt;
|-&lt;br /&gt;
!Format&lt;br /&gt;
|PCM (WAV), uncompressed&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Set your audio for low-latency, high-accuracy playback with ffmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
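To confirm that a converted file actually has these parameters, you can inspect it with Python's standard wave module. A minimal sketch; the short test tone and its file name are purely illustrative stand-ins for your own recording:&lt;br /&gt;

```python
import math
import wave

def write_test_tone(path, rate=48000, channels=2, seconds=1):
    """Write a short 440 Hz tone as 16-bit PCM, so the check below has input."""
    with wave.open(path, 'wb') as w:
        w.setnchannels(channels)
        w.setsampwidth(2)          # 2 bytes per sample = 16-bit
        w.setframerate(rate)
        frames = bytearray()
        for i in range(rate * seconds):
            sample = int(20000 * math.sin(2 * math.pi * 440 * i / rate))
            frames += sample.to_bytes(2, 'little', signed=True) * channels
        w.writeframes(bytes(frames))

def matches_recommended(path):
    """True if the file is 48 kHz, 16-bit, stereo PCM (the settings above)."""
    with wave.open(path, 'rb') as w:
        return (w.getframerate() == 48000 and
                w.getnchannels() == 2 and
                w.getsampwidth() == 2)

write_test_tone('test_tone.wav')
print(matches_recommended('test_tone.wav'))  # True
```

Running this check on your stimulus files before an experiment catches sample-rate mismatches that would otherwise force resampling at playback time.&lt;br /&gt;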
&lt;br /&gt;
===Windows Settings===&lt;br /&gt;
Windows 10 settings to check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   Sound → Playback → right-click the device → Properties:&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab: set Default Format to 48000 Hz, 16 bit, Studio Quality.&lt;br /&gt;
&lt;br /&gt;
   - Enhancements tab: disable all enhancements.&lt;br /&gt;
&lt;br /&gt;
   - Advanced tab, Exclusive Mode:&lt;br /&gt;
&lt;br /&gt;
      - Allow applications to take exclusive control of this device → CHECKED&lt;br /&gt;
&lt;br /&gt;
      - Give exclusive mode applications priority → CHECKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd  # used below to query the OS-level output device&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    pass&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to call FFmpeg from a Python script:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# Re-encode input.mp4 with regenerated timestamps and explicit stream mapping&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', 'input.mp4',&lt;br /&gt;
                '-fflags', '+genpts', '-async', '1',&lt;br /&gt;
                '-map', '0:v:0', '-map', '0:a:0',&lt;br /&gt;
                'output.mp4'], check=True)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 \&lt;br /&gt;
       -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 \&lt;br /&gt;
       -c:a pcm_s16le -ar 48000 \&lt;br /&gt;
       -fflags +genpts -async 1 \&lt;br /&gt;
       output.mp4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* -c:v libx264: Encodes video using H.264.&lt;br /&gt;
* -preset slow: Optimizes for quality and compression efficiency.&lt;br /&gt;
* -crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
* -vsync cfr: Enforces a constant frame rate.&lt;br /&gt;
* -c:a pcm_s16le: Encodes audio as uncompressed 16-bit PCM.&lt;br /&gt;
* -ar 48000: Sets the audio sample rate to 48 kHz, matching the recommended audio settings.&lt;br /&gt;
* -fflags +genpts: Generates accurate presentation timestamps.&lt;br /&gt;
* -async 1: Synchronizes the audio and video streams.&lt;br /&gt;
&lt;br /&gt;
===Tips===&lt;br /&gt;
* Ensure low latency: if you process video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
* Avoid resampling: where possible, keep the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
* Test playback: always test on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can use Shotcut, a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The [[Lab Computer]] displays are typically set to 1920×1080 at 120 Hz, which is sufficient for most applications; higher refresh rates are possible on request.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
We recommend using DaVinci Resolve for editing and converting video files. DaVinci Resolve is a free, professional-grade editing program, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Windows Settings==&lt;br /&gt;
Windows 10 tends to enable '''video enhancements''' and other processing features automatically, which can interfere with smooth playback. Please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
# Open Settings → System → Display → Graphics Settings.&lt;br /&gt;
# If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for timing-critical experiments.&lt;br /&gt;
# For specific applications (e.g., PsychoPy), set &amp;quot;Graphics Performance Preference&amp;quot; to &amp;quot;High Performance&amp;quot; so they use the dedicated GPU.&lt;br /&gt;
==Playback==&lt;br /&gt;
&lt;br /&gt;
=== PsychoPy ===&lt;br /&gt;
Example demonstrating how to play a video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
from psychopy import visual, core&lt;br /&gt;
&lt;br /&gt;
## Setup Section&lt;br /&gt;
win = visual.Window([720,720], fullscr=False, monitor=&amp;quot;testMonitor&amp;quot;, units='cm')&lt;br /&gt;
&lt;br /&gt;
# append this stimulus to the list of prepared stimuli&lt;br /&gt;
vlc_movies = []&lt;br /&gt;
my_movies = ['YourMovie.mp4']  # paths to your movies, relative to this directory&lt;br /&gt;
&lt;br /&gt;
for movie in my_movies:&lt;br /&gt;
    mov = visual.VlcMovieStim(win, movie,&lt;br /&gt;
    size=600,  # set as `None` to use the native video size&lt;br /&gt;
    pos=[0, 0],  # pos specifies the /center/ of the movie stim location&lt;br /&gt;
    flipVert=False,  # flip the video picture vertically&lt;br /&gt;
    flipHoriz=False,  # flip the video picture horizontally&lt;br /&gt;
    loop=False,  # replay the video when it reaches the end&lt;br /&gt;
    autoStart=True)  # start the video automatically when first drawn&lt;br /&gt;
    vlc_movies.append(mov)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;playing video....&amp;quot;)&lt;br /&gt;
while not keyboard.is_pressed('q') and vlc_movies[0].status != visual.FINISHED:&lt;br /&gt;
    vlc_movies[0].draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
    print(vlc_movies[0].status)  # optional: monitor the playback status&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Stop&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
## Closing Section&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
	<entry>
		<id>http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6008</id>
		<title>Video Playback</title>
		<link rel="alternate" type="text/html" href="http://tsgdoc.socsci.ru.nl/index.php?title=Video_Playback&amp;diff=6008"/>
		<updated>2025-04-28T13:34:11Z</updated>

		<summary type="html">&lt;p&gt;P.dewater: /* Audio Settings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using video in your experiment, especially when presenting time-critical stimuli, take special care to optimize the video and audio settings at multiple levels (hardware, OS, script), as many things can go wrong along the way.&lt;br /&gt;
&lt;br /&gt;
This page outlines some best practices; however, we advise always consulting a TSG member if you plan to run a video experiment in the labs.&lt;br /&gt;
&lt;br /&gt;
==Video encoding==&lt;br /&gt;
When recording video for stimulus material or as input for your experiment, please:&lt;br /&gt;
* Use a high-quality camera, with settings appropriate for your application (e.g., frame rate, resolution).&lt;br /&gt;
* Use a high-quality recorder or capture device, capable of recording at 1080p (1920×1080) and 60 fps or higher.&lt;br /&gt;
* Stabilize the camera and avoid automatic exposure, white balance, or focus during recording to prevent inconsistencies.&lt;br /&gt;
* Record in a controlled environment with consistent lighting and minimal background distractions.&lt;br /&gt;
You can use the '''facecam''' for high-quality video recording.&lt;br /&gt;
&lt;br /&gt;
===Video Settings===&lt;br /&gt;
We recommend using the following settings:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!File format &lt;br /&gt;
|.mp4 (H.264 codec, libx264)&lt;br /&gt;
|-&lt;br /&gt;
!Frame rate &lt;br /&gt;
|60 fps (frames per second)&lt;br /&gt;
|-&lt;br /&gt;
!Resolution&lt;br /&gt;
|1920×1080 (Full HD) or match your experiment's display settings&lt;br /&gt;
|-&lt;br /&gt;
!Bitrate &lt;br /&gt;
|10-20 Mbps for Full HD video&lt;br /&gt;
|-&lt;br /&gt;
!Constant Frame Rate (CFR)&lt;br /&gt;
|enforce a constant frame rate&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
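As a quick sanity check, a small script can compare a clip's reported properties against the table above. This is only a sketch: the `clip` dict stands in for whatever your probing tool (e.g., ffprobe) reports, and all names and values here are hypothetical.

```python
# Sketch: compare a clip's reported properties against the recommended
# settings above. The `clip` dict stands in for parsed ffprobe/mediainfo
# output; keys and values are hypothetical.
RECOMMENDED = {"codec": "h264", "fps": 60, "width": 1920, "height": 1080}

def check_clip(clip):
    """Return human-readable warnings for settings that deviate from the table."""
    warnings = []
    for key, want in RECOMMENDED.items():
        got = clip.get(key)
        if got != want:
            warnings.append(f"{key}: got {got!r}, recommended {want!r}")
    return warnings

clip = {"codec": "h264", "fps": 30, "width": 1920, "height": 1080}
print(check_clip(clip))  # → ['fps: got 30, recommended 60']
```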
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to record a video with a facecam:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import datetime&lt;br /&gt;
import cv2&lt;br /&gt;
import ctypes&lt;br /&gt;
import ffmpegcv&lt;br /&gt;
&lt;br /&gt;
# Set the Windows timer resolution to 1 ms for more accurate sleeps&lt;br /&gt;
winmm = ctypes.WinDLL('winmm')&lt;br /&gt;
winmm.timeBeginPeriod(1)&lt;br /&gt;
&lt;br /&gt;
def configure_webcam(cam_id, width=1920, height=1080, fps=60):&lt;br /&gt;
    cap = cv2.VideoCapture(cam_id, cv2.CAP_DSHOW)&lt;br /&gt;
    if not cap.isOpened():&lt;br /&gt;
        print(f&amp;quot;Error: Couldn't open webcam {cam_id}.&amp;quot;)&lt;br /&gt;
        return None&lt;br /&gt;
&lt;br /&gt;
    # Try to set each property&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)&lt;br /&gt;
    cap.set(cv2.CAP_PROP_FPS, fps)&lt;br /&gt;
&lt;br /&gt;
    # Read back the values&lt;br /&gt;
    actual_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)&lt;br /&gt;
    actual_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)&lt;br /&gt;
    actual_fps = cap.get(cv2.CAP_PROP_FPS)&lt;br /&gt;
&lt;br /&gt;
    print(f&amp;quot;Resolution set to: {actual_width}x{actual_height}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;FPS set to: {actual_fps}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    return cap&lt;br /&gt;
&lt;br /&gt;
def getWebcamData():&lt;br /&gt;
    global frame_width&lt;br /&gt;
    global frame_height&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;opening webcam...&amp;quot;)&lt;br /&gt;
    camera = configure_webcam(1, frame_width, frame_height)&lt;br /&gt;
    if camera is None:&lt;br /&gt;
        return&lt;br /&gt;
    time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')&lt;br /&gt;
    file_name = time_stamp + '_output.avi'&lt;br /&gt;
    video_writer = ffmpegcv.VideoWriter(file_name, 'h264', fps=freq)&lt;br /&gt;
    &lt;br /&gt;
    while True:&lt;br /&gt;
        grabbed = camera.grab()&lt;br /&gt;
        if grabbed:&lt;br /&gt;
            grabbed, frame = camera.retrieve()&lt;br /&gt;
            &lt;br /&gt;
            video_writer.write(frame)  # Write the video to the file system&lt;br /&gt;
            &lt;br /&gt;
            frame = cv2.resize(frame, (int(frame_width/4),int(frame_height/4)))&lt;br /&gt;
            cv2.imshow(&amp;quot;Frame&amp;quot;, frame)  # show the frame to our screen&lt;br /&gt;
        &lt;br /&gt;
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):&lt;br /&gt;
            break&lt;br /&gt;
&lt;br /&gt;
    # Release the camera and finish writing the file&lt;br /&gt;
    camera.release()&lt;br /&gt;
    video_writer.release()&lt;br /&gt;
&lt;br /&gt;
freq = 60&lt;br /&gt;
frame_width = 1920 &lt;br /&gt;
frame_height = 1080&lt;br /&gt;
&lt;br /&gt;
getWebcamData()&lt;br /&gt;
&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Audio encoding==&lt;br /&gt;
===Audio Settings===&lt;br /&gt;
We recommend using the following settings for audio:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Codec&lt;br /&gt;
|lossless or high-quality codecs&lt;br /&gt;
|-&lt;br /&gt;
!PCM (WAV)&lt;br /&gt;
|uncompressed&lt;br /&gt;
|-&lt;br /&gt;
!Sample Rate&lt;br /&gt;
|48 kHz&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Prepare your audio files for low-latency, high-accuracy playback with FFmpeg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   ffmpeg -i input.wav -ar 48000 -ac 2 -sample_fmt s16 output_fixed.wav&lt;br /&gt;
&lt;br /&gt;
   Explanation:&lt;br /&gt;
   -ar 48000 → Set sample rate to 48000 Hz (standard for ASIO/Windows audio, matches most soundcards)&lt;br /&gt;
   -ac 2 → Set 2 channels (stereo)&lt;br /&gt;
   -sample_fmt s16 → Use 16-bit signed integer samples&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
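After converting, you can verify a file's header with Python's built-in wave module. This sketch writes a short silent test file (the file name is arbitrary) and reads its parameters back; run the same read-back check on your real stimulus files.

```python
import wave

# Write a short silent stereo WAV at 48 kHz (arbitrary test file name)...
with wave.open("check_48k.wav", "wb") as w:
    w.setnchannels(2)      # stereo
    w.setsampwidth(2)      # 16-bit signed PCM
    w.setframerate(48000)  # 48 kHz sample rate
    w.writeframes(b"\x00\x00" * 2 * 480)  # 10 ms of silence

# ...then read the header back to confirm the settings stuck
with wave.open("check_48k.wav", "rb") as w:
    rate, channels, width = w.getframerate(), w.getnchannels(), w.getsampwidth()

print(rate, channels, width)  # → 48000 2 2
```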
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to check and play your audio:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
&lt;br /&gt;
import psychopy&lt;br /&gt;
print(psychopy.__version__)&lt;br /&gt;
import sys&lt;br /&gt;
print(sys.version)&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
import sounddevice as sd  # used below to query the OS-level output device&lt;br /&gt;
from psychopy import prefs&lt;br /&gt;
from psychopy import visual, core, event&lt;br /&gt;
&lt;br /&gt;
from psychopy.sound import backend_ptb&lt;br /&gt;
# 0: No special settings (default, not optimized)&lt;br /&gt;
# 1: Try low-latency but allow some delay&lt;br /&gt;
# 2: Aggressive low-latency&lt;br /&gt;
# 3: Exclusive mode, lowest latency but may not work on all systems&lt;br /&gt;
backend_ptb.SoundPTB.latencyMode = 2&lt;br /&gt;
&lt;br /&gt;
prefs.hardware['audioLib'] = ['PTB']&lt;br /&gt;
prefs.hardware['audioDriver'] = ['ASIO']&lt;br /&gt;
prefs.hardware['audioDevice'] = ['ASIO4ALL v2']&lt;br /&gt;
from psychopy import sound&lt;br /&gt;
&lt;br /&gt;
# --- OS-level audio device sample rate ---&lt;br /&gt;
default_output = sd.query_devices(kind='output')&lt;br /&gt;
print(&amp;quot;\nDefault output device info (OS level):&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Name: {default_output['name']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Default Sample Rate: {default_output['default_samplerate']} Hz&amp;quot;)&lt;br /&gt;
print(f&amp;quot;  Max Output Channels: {default_output['max_output_channels']}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Confirm the audio library and output settings&lt;br /&gt;
print(f&amp;quot;Using {sound.audioLib} for sound playback.&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio library options: {prefs.hardware['audioLib']}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio driver: {prefs.hardware.get('audioDriver', 'Default')}&amp;quot;)&lt;br /&gt;
print(f&amp;quot;Audio device: {prefs.hardware.get('audioDevice', 'Default')}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
audio_file = 'tick_rhythm_5min.wav'&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Creating sound...&amp;quot;)&lt;br /&gt;
wave_file = sound.Sound(audio_file)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Playing sound...&amp;quot;)&lt;br /&gt;
wave_file.play()&lt;br /&gt;
&lt;br /&gt;
while not keyboard.is_pressed('q'):&lt;br /&gt;
    core.wait(0.01)  # avoid a busy-wait while the sound plays&lt;br /&gt;
&lt;br /&gt;
# Clean up&lt;br /&gt;
print(&amp;quot;Exiting...&amp;quot;)&lt;br /&gt;
core.quit()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==FFmpeg==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Ensure the audio and video streams have consistent timestamps: &lt;br /&gt;
&lt;br /&gt;
FFmpeg Options: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
       -fflags +genpts: Generates accurate presentation timestamps (PTS) for the video.&lt;br /&gt;
&lt;br /&gt;
       -async 1: Synchronizes audio and video when they drift.&lt;br /&gt;
&lt;br /&gt;
       -map 0:v:0 and -map 0:a:0: Explicitly map video and audio streams to avoid accidental mismatches.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
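To make the effect of these options concrete, here is a small illustration of what "drift" means: at a nominal 60 fps, frame n should be presented at n/60 s, and any deviation of the container's timestamps from that grid accumulates as audio/video desynchronization. The timestamps below are made-up example values, not taken from a real file.

```python
# Illustration of timestamp drift (hypothetical numbers, not from a real file):
# at a nominal 60 fps, frame n should be presented at n / 60 seconds.
NOMINAL_FPS = 60

def expected_pts(frame_index, fps=NOMINAL_FPS):
    """Expected presentation timestamp (seconds) for a frame at a constant rate."""
    return frame_index / fps

# Suppose a demuxer reports these (made-up) timestamps for frames 0-3:
actual_pts = [0.000, 0.0167, 0.0334, 0.0501]

# Drift per frame: actual minus expected presentation time
drift = [round(t - expected_pts(i), 4) for i, t in enumerate(actual_pts)]
print(drift)  # → [0.0, 0.0, 0.0001, 0.0001]
```

Over a long recording, even a sub-millisecond per-frame offset like this adds up, which is why `-fflags +genpts` and `-async 1` matter for time-critical stimuli.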
&lt;br /&gt;
=== Python ===&lt;br /&gt;
Example demonstrating how to call FFmpeg from a Python script:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# Re-encode input.mp4 with regenerated timestamps and explicit stream mapping&lt;br /&gt;
subprocess.run(['ffmpeg', '-i', 'input.mp4',&lt;br /&gt;
                '-fflags', '+genpts', '-async', '1',&lt;br /&gt;
                '-map', '0:v:0', '-map', '0:a:0',&lt;br /&gt;
                'output.mp4'], check=True)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Recommended FFmpeg Command===&lt;br /&gt;
Here’s a command that encodes video and audio while maintaining high time accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ffmpeg -i input.mp4 \&lt;br /&gt;
       -c:v libx264 -preset slow -crf 18 -vsync cfr -g 30 \&lt;br /&gt;
       -c:a pcm_s16le -ar 48000 \&lt;br /&gt;
       -fflags +genpts -async 1 \&lt;br /&gt;
       output.mp4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* -c:v libx264: Encodes video using H.264.&lt;br /&gt;
* -preset slow: Optimizes for quality and compression efficiency.&lt;br /&gt;
* -crf 18: Adjusts quality (lower = better; range: 0–51).&lt;br /&gt;
* -vsync cfr: Enforces a constant frame rate.&lt;br /&gt;
* -c:a pcm_s16le: Encodes audio as uncompressed 16-bit PCM.&lt;br /&gt;
* -ar 48000: Sets the audio sample rate to 48 kHz, matching the recommended audio settings.&lt;br /&gt;
* -fflags +genpts: Generates accurate presentation timestamps.&lt;br /&gt;
* -async 1: Synchronizes the audio and video streams.&lt;br /&gt;
&lt;br /&gt;
===Tips===&lt;br /&gt;
* Ensure low latency: if you process video/audio in real time, use low-latency settings (e.g., -tune zerolatency for H.264).&lt;br /&gt;
* Avoid resampling: where possible, keep the original frame rate and sample rate to avoid timing mismatches.&lt;br /&gt;
* Test playback: always test on different devices or players to confirm synchronization.&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can use Shotcut, a simple open-source editor, available here: https://shotcut.org/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The [[Lab Computer]] displays are typically set to 1920×1080 at 120 Hz, which is sufficient for most applications; higher refresh rates are possible on request.&lt;br /&gt;
&lt;br /&gt;
==Editing==&lt;br /&gt;
We recommend using DaVinci Resolve for editing and converting video files. DaVinci Resolve is a free, professional-grade editing program, available here: https://www.blackmagicdesign.com/products/davinciresolve&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Windows Settings==&lt;br /&gt;
Windows 10 tends to enable '''video enhancements''' and other processing features automatically, which can interfere with smooth playback. Please make sure these are disabled:&lt;br /&gt;
&lt;br /&gt;
# Open Settings → System → Display → Graphics Settings.&lt;br /&gt;
# If available, disable &amp;quot;Hardware-accelerated GPU scheduling&amp;quot; for timing-critical experiments.&lt;br /&gt;
# For specific applications (e.g., PsychoPy), set &amp;quot;Graphics Performance Preference&amp;quot; to &amp;quot;High Performance&amp;quot; so they use the dedicated GPU.&lt;br /&gt;
==Playback==&lt;br /&gt;
&lt;br /&gt;
=== PsychoPy ===&lt;br /&gt;
Example demonstrating how to play a video:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot; line&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3.10&lt;br /&gt;
# -*- coding: utf-8 -*-&lt;br /&gt;
&lt;br /&gt;
import keyboard&lt;br /&gt;
from psychopy import visual, core&lt;br /&gt;
&lt;br /&gt;
## Setup Section&lt;br /&gt;
win = visual.Window([720,720], fullscr=False, monitor=&amp;quot;testMonitor&amp;quot;, units='cm')&lt;br /&gt;
&lt;br /&gt;
# append this stimulus to the list of prepared stimuli&lt;br /&gt;
vlc_movies = []&lt;br /&gt;
my_movies = ['YourMovie.mp4']  # paths to your movies, relative to this directory&lt;br /&gt;
&lt;br /&gt;
for movie in my_movies:&lt;br /&gt;
    mov = visual.VlcMovieStim(win, movie,&lt;br /&gt;
    size=600,  # set as `None` to use the native video size&lt;br /&gt;
    pos=[0, 0],  # pos specifies the /center/ of the movie stim location&lt;br /&gt;
    flipVert=False,  # flip the video picture vertically&lt;br /&gt;
    flipHoriz=False,  # flip the video picture horizontally&lt;br /&gt;
    loop=False,  # replay the video when it reaches the end&lt;br /&gt;
    autoStart=True)  # start the video automatically when first drawn&lt;br /&gt;
    vlc_movies.append(mov)&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;playing video....&amp;quot;)&lt;br /&gt;
while not keyboard.is_pressed('q') and vlc_movies[0].status != visual.FINISHED:&lt;br /&gt;
    vlc_movies[0].draw()&lt;br /&gt;
    win.flip()&lt;br /&gt;
    print(vlc_movies[0].status)  # optional: monitor the playback status&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;Stop&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
## Closing Section&lt;br /&gt;
core.quit()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>P.dewater</name></author>
	</entry>
</feed>