Description
The standard subject input device is meant to provide a minimum level
of user input from all VRUT workstations. The minimum hardware configuration
of a sid is the following:
Methods:
sid.BUTTON1
sid.BUTTON2
sid.BUTTON3
...
sid.TRIGGER (same as BUTTON1)
For example, use the following code fragment to test whether the user is holding down button #2:
import sid
if sid.buttons() & sid.BUTTON2:
    print "Button 2 is pressed"
Description
The video tracking sensor provides 3 DOF (position only) or 4 DOF (position
and azimuth) tracking of a single target (e.g., a subject). The system
works by employing real-time video capture of point light sources in a darkened
environment and uses the data from dual cameras to triangulate the location
of these point light sources. The advantages of this system are that
it is not limited to a maximum tracking volume and that it has low latency
(< 30 ms). The resolution of the system is approximately 1:20,000, and
accuracy largely depends on the size of the tracking volume and the quality
of the calibration. Note that this system requires special hardware
and special drivers to be installed in the machine being used.
Usage
To enable video tracking, add the sensor using the standard sensor
technique: s = addsensor('video'). To then enable default
head tracking behavior, follow the addsensor command with tracker()
to start tracking. Use the s.reset() command to zero the
observer's current location so that the measured coordinates are x=0 and
z=0.
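For instance, a minimal sketch of a head-tracked script, using only the calls described above and assuming the video plug-in hardware and drivers are installed:

s = addsensor('video')    # load the video tracking plug-in
tracker()                 # enable the default head tracking behavior
s.reset()                 # zero the observer's location so that x=0 and z=0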
There are several commands that can be used when the video plug-in is enabled; they allow you to modify the operation of the tracker or query it for troubleshooting purposes. A short example of issuing these commands is given after the table below.
------------- Commands ------------
Value | Name | Description |
1 | Set intensity threshold level | ARGUMENT REQUIRED. Set the minimum light intensity level that the system will consider a valid signal. This should be set well above the background noise level, but not so high that the light source itself falls below threshold when at the maximum distance from the cameras. Use the camera statistics function to determine the min/max scene intensity values. Default value is 120. |
2 | Set movement threshold level | ARGUMENT REQUIRED. Set the minimum movement threshold that the system will register, in pixel units (the standard video camera samples at 640 x 480 pixels). Setting this too low will cause the tracker to jitter at the noise level, but setting this too high will quantize the tracking motions. Use the camera statistics function to determine the standard error in the cameras' scan line locations and gauge the appropriate value to use. Default value is 0. |
3 | Activate azimuth sensing mode | Using dual lights (typically mounted on a backpack and worn by the subject), this mode calculates extra azimuthal information and can be used to determine the body's azimuthal direction in addition to the head orientation. When a valid 2-point light source object is not found, the sensor will cause the computer to beep. Default is off. |
4 | Set translation gain | ARGUMENT REQUIRED. Set the gain of the translation data sent to VRUT. For example, if you want to double the distance which the user moves about in the virtual environment, you'd want to set the gain to 2. Default value is 1.0. |
10 | Print video statistics | Display the mean calculated horizontal and vertical scan line locations for both cameras, along with the standard error over 100 samples. The minimum and maximum intensity values in each camera are also displayed. |
11 | Save camera images to hard disk | Save the captured image of each camera to text data files on the hard disk. These files can then easily be opened in MATLAB using the 'load' function and displayed using the 'image' function. |
12 | Rotate cardinal directions | This command rotates the cardinal directions by 180 deg so that virtual North points toward the video cameras. Default orientation is virtual North pointing away from the cameras. NOTE: If command 4 is used, this command must be issued afterwards in order for the change to take effect. |
Commands 1, 2, and 4 require that an argument be passed to the sensor using the following special method. Assuming s = addsensor('video') was used to load the sensor, the argument is passed by suffixing it to the command value after a decimal point. More specifically, the desired argument value is first divided by 1000 and then added to the command's integer value. For example, to set the lowest brightness value used in the calculations to 120 (out of a possible range of 0 to 255), you would type s.command(1.120). The remaining commands simply take a number which causes a specific action to be performed, e.g., s.command(10).
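A short sketch of this encoding scheme, using the command values from the table above (the specific threshold and gain values are only illustrative):

s = addsensor('video')
# argument commands: command value + (argument / 1000)
s.command(1.120)    # command 1: intensity threshold = 120   (1 + 120/1000)
s.command(2.002)    # command 2: movement threshold = 2 px   (2 + 2/1000)
s.command(4.002)    # command 4: translation gain = 2        (4 + 2/1000)
# action commands take the bare command value
s.command(10)       # command 10: print video statistics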
Calibration
Follow this link for an explanation of how to calibrate the video tracking system.
Troubleshooting:
Description
The family of Intersense devices uses inertial navigation techniques
to compute the orientation of the user's head. The sensor itself
is extremely small (~1" cube), very accurate, and has extremely low latency.
It provides 3 DOF (orientation only) data in the form of Euler angles
(yaw, pitch, and roll). Both the IS-300 and the Intertrax are supported
automatically by this sensor plug-in. These devices are standard
serial port (RS-232) devices and can be plugged into any COM port, where
they will be found automatically.
Usage
To enable Intersense tracking, add the sensor using the standard sensor
technique: s = addsensor('intersense'). To then enable default
head tracking behavior, follow the addsensor command with tracker()
to start tracking. Use the s.reset() command to zero the observer's
current azimuth so that it is set to virtual North (e.g., yaw = 0).
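A minimal sketch, using only the calls described above, for head tracking with an Intersense device:

s = addsensor('intersense')   # load the Intersense plug-in (IS-300 or Intertrax)
tracker()                     # enable the default head tracking behavior
s.reset()                     # zero the current azimuth to virtual North (yaw = 0)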
Description
This mechanical goniometer provides 6 DOF tracking data (orientation
and position). It has a very limited range of motion but is robust.
These devices are standard serial port (RS-232) devices and can be plugged
into any COM port, where they will be found automatically.
Usage
To enable tracking with this device, add the sensor using the standard sensor
technique: s = addsensor('shootingstar'). To then enable
default head tracking behavior, follow the addsensor command with
tracker()
to start tracking. Use the s.reset() command to zero all the observer's
orientation values (e.g., yaw = pitch = roll = 0).
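A minimal sketch, again using only the calls described above, for head tracking with this device:

s = addsensor('shootingstar')   # load the mechanical goniometer plug-in
tracker()                       # enable the default head tracking behavior
s.reset()                       # zero the orientation (yaw = pitch = roll = 0)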