Implementation

Video acquisition module (pymanip.video)
This module defines the Camera abstract base class, which implements common methods such as the live video preview, as well as higher-level convenience methods to quickly set up a video recording. It also defines common useful functions, and a simple extension of Numpy arrays to hold metadata (such as the frame timestamp).
class pymanip.video.Camera

This class is the abstract base class for all other concrete camera classes. The concrete sub-classes must implement the following methods:

- acquisition_oneshot() method
- acquisition() and acquisition_async() generator methods
- resolution, name and bitdepth properties
The concrete sub-classes will also probably have to override the constructor method and the enter/exit context manager methods, as well as common property getters and setters:

- set_exposure_time()
- set_trigger_mode()
- set_roi()
- set_frame_rate()

They may also define specialized setters for the cameras which support them:

- set_adc_operating_mode(): ADC operating mode
- set_pixel_rate(): pixel rate sensor readout (in Hz)
- set_delay_exposuretime()
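As an illustration of this structure, here is a minimal sketch of what a concrete sub-class might look like. The Camera stand-in and the FakeCamera sub-class below are hypothetical, self-contained examples, not the real pymanip classes; a real sub-class would call a vendor SDK where indicated.

```python
import abc


class Camera(abc.ABC):
    """Illustrative stand-in for the pymanip.video.Camera abstract base class."""

    @abc.abstractmethod
    def acquisition_oneshot(self):
        """Grab and return a single frame."""

    @property
    @abc.abstractmethod
    def name(self):
        """Camera name."""

    @property
    @abc.abstractmethod
    def resolution(self):
        """(width, height) tuple."""

    @property
    @abc.abstractmethod
    def bitdepth(self):
        """Number of bits per pixel."""


class FakeCamera(Camera):
    """Hypothetical concrete sub-class returning synthetic frames."""

    def acquisition_oneshot(self):
        # A real sub-class would grab a frame from the camera SDK here.
        width, height = self.resolution
        return [[0] * width for _ in range(height)]

    @property
    def name(self):
        return "FakeCamera"

    @property
    def resolution(self):
        return (4, 3)

    @property
    def bitdepth(self):
        return 16
```

A sub-class that omits one of the abstract methods or properties cannot be instantiated, which is the point of the abstract base class.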
acquire_signalHandler(*args, **kwargs)

This method sends a stop signal to the acquire_to_files_async() method.
acquire_to_files(*args, **kwargs)

This method starts the camera, acquires images and saves them to the disk. It is a simple wrapper around the pymanip.video.Camera.acquire_to_files_async() asynchronous method. The parameters are identical.
async acquire_to_files_async(num, basename, zerofill=4, dryrun=False, file_format='png', compression=None, compression_level=3, verbose=True, delay_save=False, progressbar=True, initialising_cams=None, **kwargs)

This asynchronous method starts the camera, acquires num images and saves them to the disk. It is a simple, quick way to perform camera acquisition (a one-liner in the user code).

Parameters:

- num (int) – number of frames to acquire
- basename (str) – basename for the image filenames to be saved on disk
- zerofill (int, optional) – number of digits for the frame number in the image filename, defaults to 4
- dryrun (bool, optional) – do the acquisition, but save nothing (for testing purposes), defaults to False
- file_format (str, optional) – format for the image files, defaults to "png". Possible values are "raw", "npy", "npy.gz", "hdf5", "png" or any other extension supported by OpenCV imwrite.
- compression (str, optional) – compression option for the HDF5 format ("gzip" or "lzf"), defaults to None
- compression_level (int, optional) – compression level for the PNG format, defaults to 3
- verbose (bool, optional) – print information messages, defaults to True
- delay_save (bool, optional) – keep all the frames in RAM, and save them at the end of the acquisition. This is useful for fast framerates, when saving to disk is too slow. Defaults to False.
- progressbar (bool, optional) – use the progressbar module to show a progress bar, defaults to True
- initialising_cams (set, optional) – None, or a set of camera objects. This camera object will remove itself from the set once it is ready to grab frames. Useful in multi-camera acquisitions, to determine when all cameras are ready to grab frames. Defaults to None.
Returns: image_counter, frame_datetime

Return type: list, list
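The basename, zerofill and file_format parameters combine into the on-disk filename. As a plausible reconstruction of the naming scheme (the exact format string used internally is not shown in this documentation, so image_filename below is a hypothetical helper):

```python
def image_filename(basename, ii, zerofill=4, file_format="png"):
    """Build the on-disk filename for frame number ii, e.g. 'img-0001.png'.

    Hypothetical helper illustrating how basename, zerofill and
    file_format are likely combined; not part of the pymanip API.
    """
    return f"{basename}{str(ii).zfill(zerofill)}.{file_format}"
```

For example, image_filename("img-", 1) gives "img-0001.png", matching the default zerofill of 4 digits.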
The details of the file formats are given in this table:

file_format     description
raw             native 16-bit integers, i.e. li16 (little-endian) on Intel CPUs
npy             numpy npy file (warning: depends on the pickle format)
npy.gz          gzip-compressed numpy file
hdf5            HDF5, with optional compression
png, jpg, tif   image formats written with OpenCV imwrite, with optional compression level for PNG
Typical usage of the function for one camera:

import asyncio

async def main():
    with Camera() as cam:
        counts, times = await cam.acquire_to_files_async(num=20, basename='img-')

asyncio.run(main())
acquisition(num=inf, timeout=1000, raw=False, initialising_cams=None, raise_on_timeout=True)

This generator method is the main method that sub-classes must implement, along with the asynchronous variant. It is used by all the other higher-level methods, and can also be used directly in user code.

Parameters:

- num (int, or float("inf"), optional) – number of frames to acquire, defaults to float("inf")
- timeout (int, optional) – timeout for frame acquisition (in milliseconds)
- raw (bool, optional) – if True, returns bytes from the camera without any conversion, defaults to False
- initialising_cams (set, optional) – None, or a set of camera objects. This camera object will remove itself from the set once it is ready to grab frames. Useful in multi-camera acquisitions, to determine when all cameras are ready to grab frames. Defaults to None.
- raise_on_timeout (bool, optional) – whether to raise an exception when a timeout occurs

It starts the camera, yields num images, and closes the camera. The acquisition can be aborted by sending the generator a truthy object. The generator then cleanly stops the camera, and finally yields True to confirm that the stop signal has been caught before returning. Sub-classes must therefore read the possible stop_signal when yielding the frame, and act accordingly.

The MetadataArray objects yielded by this generator use a shared memory buffer, which may be overwritten for the next frame, and which is no longer defined once the generator object is cleaned up. Users are responsible for copying the array if they want a persistent copy.

The typical code structure in sub-classes must be like this:
def acquisition(self, *args, **kwargs):
    # ... setup code ...
    while count < num:
        # ... grab frame code ...
        stop_signal = yield MetadataArray(
            frame,
            metadata={"counter": count, "timestamp": timestamp},
        )
        if stop_signal:
            break
    # ... cleanup code ...
    if stop_signal:
        yield True
User-level code will use the generator in this manner:

gen = cam.acquisition()
for frame in gen:
    # .. do something with frame ..
    if I_want_to_stop:
        clean = gen.send(True)
        if not clean:
            print('Warning: generator not cleaned')
        # no need to break here because the generator will be automatically exhausted
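The stop_signal protocol described above can be exercised with a toy generator in place of a real camera. The frames here are plain dictionaries standing in for MetadataArray objects, and toy_acquisition is a hypothetical example, not a pymanip function:

```python
def toy_acquisition(num=10):
    """Toy generator following the same stop_signal protocol as acquisition()."""
    stop_signal = False
    count = 0
    while count < num:
        # Placeholder for yielding a real MetadataArray frame.
        stop_signal = yield {"counter": count}
        if stop_signal:
            break
        count += 1
    if stop_signal:
        yield True  # confirm the stop signal has been caught


gen = toy_acquisition(num=100)
frames = []
clean = False
for frame in gen:
    frames.append(frame)
    if frame["counter"] == 2:  # our own stop condition
        clean = gen.send(True)  # True once the generator has stopped cleanly
        # no break needed: the generator is exhausted after yielding True
```

After the loop, frames holds the three acquired frames and clean is True, confirming that the generator caught the stop signal and cleaned up.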
async acquisition_async(num=inf, timeout=1000, raw=False, initialising_cams=None, raise_on_timeout=True)

This asynchronous generator method is similar to the acquisition() generator method, except asynchronous. So much so that, in the general case, the latter can be defined simply by yielding from this asynchronous generator (so that the code is written only once for both use cases), i.e.

from pymanip.asynctools import synchronize_generator

def acquisition(
    self,
    num=np.inf,
    timeout=1000,
    raw=False,
    initialising_cams=None,
    raise_on_timeout=True,
):
    yield from synchronize_generator(
        self.acquisition_async,
        num,
        timeout,
        raw,
        initialising_cams,
        raise_on_timeout,
    )
It starts the camera, yields num images, and closes the camera. It can stop yielding images when it is sent a truthy object. It then cleanly stops the camera, and finally yields True to confirm that the stop signal has been caught before returning. Sub-classes must therefore read the possible stop_signal when yielding the frame, and act accordingly.

The MetadataArray objects yielded by this generator use a shared memory buffer, which may be overwritten for the next frame, and which is no longer defined once the generator object is cleaned up. Users are responsible for copying the array if they want a persistent copy.

The user API is similar, except with asynchronous calls, i.e.
gen = cam.acquisition_async()
async for frame in gen:
    # .. do something with frame ..
    if I_want_to_stop:
        clean = await gen.asend(True)
        if not clean:
            print('Warning: generator not cleaned')
        # no need to break here because the generator will be automatically exhausted
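The real synchronize_generator implementation is not reproduced in this documentation. A simplified sketch of the idea, driving an asynchronous generator from synchronous code under the assumption that no event loop is already running (unlike pymanip's version, this sketch does not forward stop signals sent with send()):

```python
import asyncio


def synchronize_generator(async_gen_func, *args, **kwargs):
    """Drive an async generator to exhaustion from synchronous code (sketch)."""
    agen = async_gen_func(*args, **kwargs)
    loop = asyncio.new_event_loop()
    try:
        while True:
            try:
                # Run the event loop just long enough to produce the next item.
                item = loop.run_until_complete(agen.__anext__())
            except StopAsyncIteration:
                break
            yield item
    finally:
        loop.close()


async def countdown(n):
    """Toy async generator standing in for acquisition_async."""
    for i in range(n, 0, -1):
        await asyncio.sleep(0)  # stand-in for awaiting a camera frame
        yield i


result = list(synchronize_generator(countdown, 3))  # [3, 2, 1]
```

This shows why the synchronous acquisition() can be written as a thin wrapper: the asynchronous generator holds all the camera logic, and the wrapper only pumps the event loop.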
acquisition_oneshot()

This method must be implemented in the sub-classes. It starts the camera, grabs one frame, stops the camera, and returns the frame. It is useful for testing purposes, or in cases where only one frame is desired between very long time delays. It takes no input parameters. It returns an "autonomous" array (the buffer is independent of the camera object).

Returns: frame
Return type: MetadataArray
display_crosshair()

This method adds a centered crosshair for self-reflection to the live-preview window (qt backend only).
preview(backend='cv', slice_=None, zoom=0.5, rotate=0)

This method starts and synchronously runs the live-preview GUI.

Parameters:

- backend (str) – GUI library to use. Possible values: "cv" for the OpenCV GUI, "qt" for the PyQtGraph GUI.
- slice_ (Iterable[int], optional) – coordinates of the region of interest to show, defaults to None
- zoom (float, optional) – zoom factor, defaults to 0.5
- rotate (float, optional) – image rotation angle, defaults to 0
async preview_async_cv(slice_, zoom, name, rotate=0)

This method starts and asynchronously runs the live-preview with the OpenCV GUI. The parameters are identical to the preview() method.
preview_cv(slice_, zoom, rotate=0)

This method starts and synchronously runs the live-preview with the OpenCV GUI. It is a wrapper around the pymanip.video.Camera.preview_async_cv() method. The parameters are identical to the preview() method.
class pymanip.video.MetadataArray

Bases: numpy.ndarray

This class extends Numpy arrays to allow for an additional metadata attribute.

metadata

Dictionary attribute containing user-defined key-value pairs.
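Attaching an attribute to a numpy array subclass typically follows numpy's documented subclassing pattern. A sketch of how such a class can be built (the real pymanip implementation may differ in detail):

```python
import numpy as np


class MetadataArray(np.ndarray):
    """Numpy array carrying a user-defined metadata dictionary (sketch)."""

    def __new__(cls, input_array, metadata=None):
        # View-cast the input data as our subclass, then attach metadata.
        obj = np.asarray(input_array).view(cls)
        obj.metadata = metadata if metadata is not None else {}
        return obj

    def __array_finalize__(self, obj):
        if obj is None:
            return
        # Propagate metadata through views and slices of the array.
        self.metadata = getattr(obj, "metadata", {})


frame = MetadataArray(
    np.zeros((4, 4), dtype=np.uint16),
    metadata={"counter": 0, "timestamp": 1234.5},
)
```

Because __array_finalize__ copies the attribute reference, slices and views of a frame keep the same metadata dictionary as the parent array.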
pymanip.video.save_image(im, ii, basename, zerofill, file_format, compression, compression_level)

This function is a simple, general function to save an input image from the camera to disk.

Parameters:

- im (MetadataArray) – input image
- ii (int) – frame number
- basename (str) – file basename
- zerofill (int) – number of digits for the frame number
- file_format (str) – image file format on disk. Possible values are "raw", "npy", "npy.gz", "hdf5", "png", or a file extension that OpenCV imwrite supports.
- compression (str) – the compression argument ("gzip" or "lzf") to pass to h5py.create_dataset() if file_format is "hdf5"
- compression_level (int) – the PNG compression level passed to OpenCV for the "png" file format