Welcome to pymanip’s documentation!
pymanip is the main package that we use for data acquisition and monitoring of
our experimental systems in the Convection group at
Laboratoire de physique de l’ENS de Lyon.
It can be seen as an extension
of the fluidlab
module, which it heavily uses.
It is available freely under the French
CECILL-B license
in the hope that it can be useful to others. But it is provided AS IS, without any warranty as to
its commercial value, its secured, safe, innovative or relevant nature.
Unlike FluidLab, pymanip does not guarantee any long-term API stability, and the API may change in the future without warning. However, some parts of the pymanip module may eventually be integrated into FluidLab, once they are stable enough.
The pymanip module is a set of tools for data acquisition and data management. Its goals are the following:
- management of experimental “sessions”, for storing and retrieving data, with useful live tools for long-time experimental monitoring, such as live plots, automated emails, remote access to the live data, and simple interrupt-signal handling;
- simplified access to FluidLab instrument classes;
- experimental implementation of asynchronous video acquisition and DAQ acquisition;
- experimental extension of FluidLab interface and instrument classes with asynchronous methods;
- miscellaneous CLI tools for saved-session introspection, live video preview, live oscilloscope and spectrum analyser-style DAQ preview, and VISA/GPIB scanning.
Installation
Dependencies
pymanip requires FluidLab, which it builds upon, as well as several third-party modules to communicate with instruments, either indirectly through FluidLab or directly. Not all dependencies are declared in the requirements.txt file, because none of them are hard dependencies. It is possible to use pymanip or FluidLab with only a subset of these dependencies.
Here is a dependency table, depending on what you want to do with the package. If a feature is available through FluidLab, the table indicates the FluidLab submodule that we use.
| pymanip module | fluidlab module | third-party modules |
|---|---|---|
| pymanip.instruments | fluidlab.instruments | gpib (linux-gpib python bindings), pymodbus, pyserial, pyvisa |
| pymanip.asyncsession | | aiohttp, aiohttp_jinja2, PyQt5 (optional) |
We also have our own bindings for some external libraries, such as pymanip.video.pixelfly for the PCO library, and pymanip.nisyscfg for the National Instruments NISysCfg library.
Download and install
We recommend installing FluidLab and pymanip from the repositories, i.e. FluidLab from Heptapod and pymanip from GitHub, and using the -e option of pip install to easily pull updates from the repositories:
$ hg clone https://foss.heptapod.net/fluiddyn/fluidlab
$ cd fluidlab
$ python -m pip install -e .
$ cd ..
$ git clone https://github.com/jsalort/pymanip.git
$ cd pymanip
$ python -m pip install -e .
However, it is also possible to install from PyPI:
$ python -m pip install fluidlab pymanip
Full installation with conda
Of course, it is possible to install the module and its dependencies any way you like. For the record, here is the procedure that we have been using in our lab for all our experimental-room computers, using Anaconda. I am not advocating that it is better than any other method. It installs many packages that are not dependencies of pymanip or fluidlab, but that we use regularly. We install as many packages as possible from conda, so that pip installs as few dependencies as possible. We also use black, flake8 and pre-commit hooks for the git repository.
Our base environment is setup like this:
$ conda create -n py37 python=3.7
$ conda activate py37
$ conda install conda
$ conda install jupyterlab jupyter_console widgetsnbextension qtconsole spyder numpy matplotlib scipy
$ conda install h5py scikit-image opencv
$ conda install git
$ conda install cython numba aiohttp flake8 filelock flask markdown
$ python -m pip install --upgrade pip
$ python -m pip install PyHamcrest
$ python -m pip install clint pint aiohttp_jinja2
$ python -m pip install pyserial pydaqmx pyvisa pyvisa-py
$ python -m pip install pyqtgraph
$ python -m pip install llc black pre-commit
$ python -m pip install importlib_resources
Then fluiddyn, fluidimage, fluidlab and pymanip are installed from the repositories, as indicated in the previous section. For computers with video acquisition, the third-party library must be installed first, followed by the corresponding third-party Python package, as indicated in the table.
Asynchronous Session
The pymanip.asyncsession.AsyncSession class provides tools to manage an asynchronous experimental session. It is the main tool that we use to set up monitoring of experimental systems, alongside FluidLab device management facilities. It manages the data storage, as well as several asynchronous functions for use during the monitoring of the experimental system, such as live plots of monitored data, regular control emails, and remote HTTP access to the live data by a human (connection from a web browser 1) or by a machine using the pymanip.asyncsession.RemoteObserver class. It has methods to access the data for processing during the experiment, or for post-processing after the experiment is finished.
Read-only access to the asyncsession data can be achieved with the pymanip.asyncsession.SavedAsyncSession class.
For synchronous sessions, one can still use the deprecated classes from pymanip.session, but these will no longer be updated; therefore the asynchronous session should now always be preferred.
Data management
AsyncSession objects can store three kinds of data:
- scalar variables monitored in time at possibly irregular time intervals. Scalar values of these variables are logged, in a “data logger” manner. They are suited to the monitoring of quantities over time when a regular measurement rate is not required. We call them “logged variables”. Tasks can save values by calling the pymanip.asyncsession.AsyncSession.add_entry() method, and they can be later retrieved by calling the following methods: pymanip.asyncsession.AsyncSession.logged_variables(), pymanip.asyncsession.AsyncSession.logged_variable(), pymanip.asyncsession.AsyncSession.logged_data(), pymanip.asyncsession.AsyncSession.logged_first_values(), pymanip.asyncsession.AsyncSession.logged_last_values(), pymanip.asyncsession.AsyncSession.logged_data_fromtimestamp(), or by using the sesn[varname] syntax shortcut, which is equivalent to the logged_variable() method.
- scalar parameters defined once per session. We call them “parameters”. A program can save parameters with the pymanip.asyncsession.AsyncSession.save_parameter() method, and they can be later retrieved by calling the pymanip.asyncsession.AsyncSession.parameter(), pymanip.asyncsession.AsyncSession.parameters(), and pymanip.asyncsession.AsyncSession.has_parameter() methods.
- non-scalar variables monitored in time at possibly irregular time intervals. A non-scalar value is typically a numpy array from an acquisition card or a frame from a camera. We call them “datasets”. They can be saved with the pymanip.asyncsession.AsyncSession.add_dataset() method, and later retrieved by the pymanip.asyncsession.AsyncSession.dataset(), pymanip.asyncsession.AsyncSession.datasets(), pymanip.asyncsession.AsyncSession.dataset_names(), pymanip.asyncsession.AsyncSession.dataset_last_data(), and pymanip.asyncsession.AsyncSession.dataset_times() methods.
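A minimal sketch of these three kinds of storage and of their retrieval (the session name and variable names are made up for the example):

import numpy as np
from pymanip.asyncsession import AsyncSession

with AsyncSession("demo_session") as sesn:
    # parameter: a single scalar value, without timestamp
    sesn.save_parameter(fluid_viscosity=1.0e-6)

    # logged variable: one scalar value per call, with a timestamp
    sesn.add_entry(T_cell=21.3)

    # dataset: a non-scalar object (here a numpy array), with a timestamp
    sesn.add_dataset(profile=np.linspace(0.0, 1.0, 128))

    # retrieval
    nu = sesn.parameter("fluid_viscosity")
    ts, T = sesn.logged_variable("T_cell")  # same as sesn["T_cell"]
    last_profile = sesn.dataset_last_data("profile")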
Task management
The main entry point for an experimental session is the pymanip.asyncsession.AsyncSession.monitor()
function which should be awaited from the main function of the program. The typical main structure of a program is then:
import asyncio

from pymanip.asyncsession import AsyncSession
# SomeDevice is a placeholder for any asynchronous instrument class

async def monitoring_task(sesn):
    some_value = await sesn.some_device.aget()
    sesn.add_entry(some_value=some_value)
    await sesn.sleep(10)

async def main(sesn):
    async with SomeDevice() as sesn.some_device:
        await sesn.monitor(monitoring_task)

with AsyncSession() as sesn:
    asyncio.run(main(sesn))
In this example, we see the use of context managers, both for the AsyncSession object and for the instrument object. The main experimental measurement lies in the monitoring_task() function, which should not contain an explicit for loop. Indeed, the sesn.monitor() method implements the loop, with checks for the interruption signal.
Additional initialisation of devices can be added to the main() function. Additional initialisation of session variables can be added in the with AsyncSession() block.
Note that all functions take the session object as their first argument. It is therefore possible to write the same code as a subclass of AsyncSession, although this is not strictly necessary, i.e.
class Monitoring(AsyncSession):
    async def monitoring_task(self):
        some_value = await self.some_device.aget()
        self.add_entry(some_value=some_value)
        await self.sleep(10)

    async def main(self):
        async with SomeDevice() as self.some_device:
            await self.monitor(self.monitoring_task)

with Monitoring() as mon:
    asyncio.run(mon.main())
The benefits of the asynchronous structure of the example program become clearer when plotting tasks and email tasks are added, or when there are several concurrent monitoring tasks. The main() function may then become:
async def main(sesn):
    async with SomeDevice() as sesn.some_device:
        await sesn.monitor(
            monitoring_task_a,
            monitoring_task_b,
            sesn.plot(['var_a', 'var_b', 'var_c']),
            sesn.send_email(
                from_addr='toto@titi.org',
                to_addrs='tata@titi.org',
                delay_hours=2.0,
            ),
        )
The useful pre-defined tasks are pymanip.asyncsession.AsyncSession.send_email(), pymanip.asyncsession.AsyncSession.plot() and pymanip.asyncsession.AsyncSession.sweep().
Remote access
The pymanip.asyncsession.AsyncSession.monitor() method sets up an HTTP server for remote access to the live data. The server can be reached from a web browser, or from another script with an instance of the pymanip.asyncsession.RemoteObserver class.
The usage of the RemoteObserver class is straightforward:
from pymanip.asyncsession import RemoteObserver

observer = RemoteObserver('remote-computer.titi.org')
observer.start_recording()
# ... some time consuming task ...
data = observer.stop_recording()
data is a dictionary which contains all the scalar variables saved on remote-computer.titi.org during the time-consuming task.
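The retrieved data can then be stored into a local session with the save_remote_data() method described below; a short sketch (the local session name is made up):

from pymanip.asyncsession import AsyncSession, RemoteObserver

observer = RemoteObserver('remote-computer.titi.org')
observer.start_recording()
# ... some time consuming task ...
data = observer.stop_recording()

# store the remote data into a local session database
with AsyncSession('local_copy') as sesn:
    sesn.save_remote_data(data)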
Implementation
Asynchronous Session Module (pymanip.asyncsession)
This module defines two classes for live acquisition, AsyncSession and RemoteObserver. The former is used to manage an experimental session, the latter to access its live data from a remote computer. There is also one class for read-only access to previous sessions, SavedAsyncSession.
- class pymanip.asyncsession.AsyncSession(session_name=None, verbose=True, delay_save=False, exist_ok=True, readonly=False, database_version=-1)[source]
This class represents an asynchronous experiment session. It is the main tool that we use to set up monitoring of experimental systems. It manages the data storage, as well as several asynchronous functions for use during the monitoring of the experimental system, such as live plots of monitored data, regular control emails, and remote HTTP access to the live data by a human (connection from a web browser), or by a machine using the pymanip.asyncsession.RemoteObserver class. It has methods to access the data for processing during the experiment, or for post-processing after the experiment is finished.
- Parameters
session_name (str, optional) – the name of the session, defaults to None. It will be used as the filename of the sqlite3 database file stored on disk. If None, then no file is created, and data will be temporarily stored in memory, but will be lost when the object is released.
verbose (bool, optional) – sets the session verbosity, defaults to True
delay_save (bool, optional) – if True, the data is stored in memory for the duration of the session, and is saved to disk only at the end of the session. It is not recommended, but useful in cases where fast operation requires avoiding disk access during the session.
- add_dataset(*args, **kwargs)[source]
This method adds arrays, or other picklable objects, as “datasets” into the database. They will hold a timestamp corresponding to the time at which the method was called.
- add_entry(*args, **kwargs)[source]
This method adds scalar values into the database. Each entry value will hold a timestamp corresponding to the time at which this method was called. Variables are passed in dictionaries or as keyword arguments. If several variables are passed, then they all hold the same timestamp.
For parameters which consist of only one scalar value, and for which timestamps are not necessary, use pymanip.asyncsession.AsyncSession.save_parameter() instead.
- ask_exit(*args, **kwargs)[source]
This method informs all tasks that the monitoring session should stop. Call this method if you wish to cleanly stop the monitoring session. It is also automatically called if the interrupt signal is received. It essentially sets the running attribute to False. Long user-defined tasks should check the running attribute, and abort if it is set to False. All other AsyncSession tasks check the attribute and will stop. The sleep() method also aborts sleeping.
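- Example
A long user-defined task can poll the running attribute as sketched here (do_one_measurement() is a hypothetical user function):
>>> async def long_user_task(sesn):
>>>     while sesn.running:
>>>         do_one_measurement()
>>>         await sesn.sleep(1, verbose=False)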
- dataset(name, ts=None, n=None)[source]
This method returns the dataset recorded at the specified timestamp, and under the specified name.
- Parameters
- Returns
the value of the recorded dataset
- Return type
- dataset_last_data(name)[source]
This method returns the last recorded dataset under the specified name.
- dataset_names()[source]
This method returns the names of the datasets currently stored in the session database.
- Returns
names of datasets
- Return type
- dataset_times(name)[source]
This method returns the timestamps of the datasets recorded under the specified name.
- Parameters
name (str) – name of the dataset to retrieve
- Returns
array of timestamps
- Return type
numpy.ndarray
- datasets(name)[source]
This method returns a generator which will yield all timestamps and datasets recorded under the specified name. The rationale for returning a generator instead of a list is that each individual dataset may be large.
- Parameters
name (str) – name of the dataset to retrieve
- Example
To plot all the recorded datasets named ‘toto’
>>> for timestamp, data in sesn.datasets('toto'):
>>>     plt.plot(data, label=f'ts = {timestamp-sesn.t0:.1f}')
To retrieve a list of all the recorded datasets named ‘toto’
>>> datas = [d for ts, d in sesn.datasets('toto')]
- async figure_gui_update()[source]
This method returns an asynchronous task which updates the figures created by the
pymanip.asyncsession.AsyncSession.plot()
tasks. This task is added automatically, and should not be used manually.
- has_metadata(name)[source]
This method returns True if the specified metadata exists in the session database.
- has_parameter(name)[source]
This method returns True if the specified parameter exists in the session database.
- property initial_timestamp
Session creation timestamp, identical to
pymanip.asyncsession.AsyncSession.t0
- property last_timestamp
Timestamp of the last recorded value
- logged_data()[source]
This method returns a name-value dictionary containing all scalar variables currently stored in the session database.
- Returns
all scalar variable values
- Return type
- logged_data_fromtimestamp(name, timestamp)[source]
This method returns the timestamps and values of a given scalar variable, recorded after the specified timestamp.
- logged_first_values()[source]
This method returns a dictionary holding the first logged value of all scalar variables stored in the session database.
- Returns
first values
- Return type
- logged_last_values()[source]
This method returns a dictionary holding the last logged value of all scalar variables stored in the session database.
- Returns
last logged values
- Return type
- logged_variable(varname)[source]
This method retrieves the timestamps and values of a specified scalar variable. It is possible to use the sesn[varname] syntax as a shortcut.
- Parameters
varname (str) – name of the scalar variable to retrieve
- Returns
timestamps and values
- Return type
tuple (timestamps, values) of numpy arrays.
- Example
>>> ts, val = sesn.logged_variable('T_Pt_bas')
- logged_variables()[source]
This method returns a set of the names of the scalar variables currently stored in the session database.
- Returns
names of scalar variables
- Return type
- async monitor(*tasks, server_port=6913, custom_routes=None, custom_figures=None, offscreen_figures=False)[source]
This method runs the specified tasks, opens a web server for remote access, and sets up the tasks required to run matplotlib event loops if necessary. This is the main method that the main function of a user program should await. It is also responsible for setting up the signal handling and binding it to the ask_exit method.
It defines a running attribute, which is finally set to False when the monitoring must stop. The user can use the ask_exit() method to stop the monitoring. Time-consuming user-defined tasks should check the running attribute and abort if it is set to False.
- Parameters
*tasks (co-routine function or awaitable) – asynchronous tasks to run: if the task is a co-routine function, it will be called repeatedly until ask_exit is called. If the task is an awaitable, it is called only once. Such an awaitable is responsible for checking that ask_exit has not been called. Several such awaitables are provided: pymanip.asyncsession.AsyncSession.send_email(), pymanip.asyncsession.AsyncSession.plot() and pymanip.asyncsession.AsyncSession.sweep().
server_port (int, optional) – the network port to open for remote HTTP connection, defaults to 6913. If None, no server is created.
custom_routes (co-routine function, optional) – additional aiohttp routes for the web server, defaults to None
custom_figures (matplotlib.pyplot.Figure, optional) – additional matplotlib figure objects that need to run the matplotlib event loop
offscreen_figures (bool, optional) – if set, figures are not shown onscreen
- async mytask(corofunc)[source]
This method repeatedly awaits the given co-routine function, as long as
pymanip.asyncsession.AsyncSession.ask_exit()
has not been called. Should not be called manually.
- parameters()[source]
This method returns all parameter names and values.
- Returns
parameters
- Return type
- async plot(varnames=None, maxvalues=1000, yscale=None, *, x=None, y=None, fixed_ylim=None, fixed_xlim=None)[source]
This method returns an asynchronous task which creates and regularly updates a plot for the specified scalar variables. Such a task should be passed to pymanip.asyncsession.AsyncSession.monitor() or pymanip.asyncsession.AsyncSession.run(), and does not have to be awaited manually.
If varnames is specified, the variables are plotted against time. If x and y are specified, then one is plotted against the other.
- Parameters
varnames (list or str, optional) – names of the scalar variables to plot
maxvalues (int, optional) – number of values to plot, defaults to 1000
yscale (tuple or list, optional) – fixed yscale for temporal plot, defaults to automatic ylim
x (str, optional) – name of the scalar variable to use on the x axis
y (str, optional) – name of the scalar variable to use on the y axis
fixed_xlim (tuple or list, optional) – fixed xscale for x-y plots, defaults to automatic xlim
fixed_ylim (tuple or list, optional) – fixed yscale for x-y plots, defaults to automatic ylim
- print_welcome()[source]
Prints informative start date/end date message. If verbose is True, this method is called by the constructor.
- run(*tasks, server_port=6913, custom_routes=None, custom_figures=None, offscreen_figures=False)[source]
Synchronous call to
pymanip.asyncsession.AsyncSession.monitor()
.
- save_database()[source]
This method is useful only if delay_save = True. Then, the database is kept in-memory for the duration of the session. This method saves the database on the disk. A new database file will be created with the content of the current in-memory database.
This method is automatically called at the exit of the context manager.
- save_parameter(*args, **kwargs)[source]
This method saves a scalar parameter into the database. Unlike scalar values saved by the pymanip.asyncsession.AsyncSession.add_entry() method, such a parameter can only hold one value, and does not have an associated timestamp. Parameters can be passed as dictionaries, or as keyword arguments.
- save_remote_data(data)[source]
This method saves the data returned by a pymanip.asyncsession.RemoteObserver object into the current session database, as datasets and parameters.
- Parameters
data (dict) – data returned by the pymanip.asyncsession.RemoteObserver object
- async send_email(from_addr, to_addrs, host, port=None, subject=None, delay_hours=6, initial_delay_hours=None, use_ssl_submission=False, use_starttls=False, user=None, password=None)[source]
This method returns an asynchronous task which sends an email at regular intervals. Such a task should be passed to pymanip.asyncsession.AsyncSession.monitor() or pymanip.asyncsession.AsyncSession.run(), and does not have to be awaited manually.
- Parameters
from_addr (str) – email address of the sender
to_addrs (str or list) – email address(es) of the recipient(s)
host (str) – hostname of the SMTP server
port (int, optional) – port number of the SMTP server, defaults to 25
delay_hours (float, optional) – interval between emails, defaults to 6 hours
initial_delay_hours (float, optional) – time interval before the first email is sent, defaults to None (immediately)
- async server_current_ts(request)[source]
This asynchronous method returns the HTTP response to a request for JSON with the current server time. Should not be called manually.
- async server_data_from_ts(request)[source]
This asynchronous method returns the HTTP response to a request for JSON with all data after the specified timestamp. Should not be called manually.
- async server_get_parameters(request)[source]
This asynchronous method returns the HTTP response to a request for JSON data of the session parameters. Should not be called manually.
- async server_logged_last_values(request)[source]
This asynchronous method returns the HTTP response to a request for JSON data of the last logged values. Should not be called manually.
- async server_main_page(request)[source]
This asynchronous method returns the HTTP response to a request for the main html web page. Should not be called manually.
- async server_plot_page(request)[source]
This asynchronous method returns the HTTP response to a request for the HTML plot page. Should not be called manually.
- async sleep(duration, verbose=True)[source]
This method returns an asynchronous task which waits the specified amount of time and prints a countdown. This should be called with verbose=True by only one of the tasks. The other tasks should call with verbose=False. This method should be preferred over asyncio.sleep because it checks that
pymanip.asyncsession.AsyncSession.ask_exit()
has not been called, and stops waiting if it has. This is useful to allow rapid abortion of the monitoring session.
- async sweep(task, iterable)[source]
This method returns an asynchronous task which repeatedly awaits a given co-routine while iterating over the specified iterable. The returned asynchronous task should be passed to pymanip.asyncsession.AsyncSession.monitor() or pymanip.asyncsession.AsyncSession.run(), and does not have to be awaited manually.
This should be used when the main task of the asynchronous session is to sweep some value. The asynchronous session will exit when all values have been iterated. This is similar to running a script which consists of a synchronous for-loop, except that other tasks, such as remote access, live plots and emails, can be run concurrently.
- Parameters
task (function) – the co-routine function to repeatedly call and await
iterable (list) – values to pass when calling the function
- Example
>>> async def balayage(sesn, voltage):
>>>     await sesn.generator.vdc.aset(voltage)
>>>     await asyncio.sleep(5)
>>>     response = await sesn.multimeter.aget(channel)
>>>     sesn.add_entry(voltage=voltage, response=response)
>>>
>>> async def main(sesn):
>>>     await sesn.monitor(sesn.sweep(balayage, [0.0, 1.0, 2.0]))
>>>
>>> asyncio.run(main(sesn))
- property t0
Session creation timestamp
- class pymanip.asyncsession.RemoteObserver(host, port=6913)[source]
This class represents remote observers of a monitoring session. It connects to the server opened on a remote computer by pymanip.asyncsession.AsyncSession.monitor(). The aim of an instance of RemoteObserver is to retrieve the data saved into the remote computer's session database.
- Parameters
- _post_request(apiname, params)[source]
Private method to send a POST request for the specified API name and params
- get_last_values()[source]
This method retrieves the last set of values from the remote monitoring session.
- Returns
scalar variable last recorded values
- Return type
- start_recording()[source]
This method establishes the connection to the remote computer, and sets the start time for the current observation session.
- stop_recording(reduce_time=True, force_reduce_time=True)[source]
This method retrieves all scalar variable data saved by the remote computer since pymanip.asyncsession.RemoteObserver.start_recording() established the connection.
- Parameters
reduce_time (bool, optional) – if True, try to collapse all timestamp arrays into a unique timestamp array. This is useful if the remote computer program only has one call to add_entry. Defaults to True.
force_reduce_time (bool, optional) – bypass checks that all scalar values indeed have the same timestamps.
- Returns
timestamps and values of all data saved in the remote computer's database since the call to pymanip.asyncsession.RemoteObserver.start_recording()
- Return type
- 1
The default port is 6913, but it can be changed, or turned off, by passing the appropriate argument to pymanip.asyncsession.AsyncSession.monitor().
Instrument drivers
The instrument drivers in pymanip are directly those of fluidlab.instruments.
An instance object represents an actual physical instrument, and the features that can be read or set are represented as instance attributes. These attributes have consistent names for all our instrument drivers, summarized in the table below:
| Physical measurement | Attribute name |
|---|---|
| DC Voltage | vdc |
| AC Voltage | vrms |
| DC Current | idc |
| AC Current | irms |
| 2-wire impedance | ohm |
| 4-wire impedance | ohm_4w |
| Signal phase shift | angle |
| Frequency | freq |
| On/Off switch | onoff |
| Pressure | pressure |
| Temperature | temperature |
| Setpoint | setpoint |
Some devices may have specific feature names in special cases, but we try to keep using similar names for similar features.
Each feature can then be accessed by get() and set() methods, as appropriate. The get() method will query the instrument for the value. The set() method will set the value.
It is a design choice to use getters and setters, instead of python properties, to make the
actual communication command more explicit.
For example to read a voltage on a multimeter:
Vdc = multimeter.vdc.get()
And to set the voltage setpoint to 1 volt on a power supply:
powersupply.vdc.set(1.0)
Unless otherwise specified, vdc.get() will always read an actual voltage, and not the voltage setpoint. This is a design choice because we think users should already know what setpoint they have set in the general case. In case it is necessary to actually query the instrument for its setpoint, an additional setpoint attribute may be defined.
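For instance, on a power supply driver which defines both features (a hedged sketch; the exact feature set depends on the driver):

powersupply.vdc.set(1.0)             # program the output voltage
v_meas = powersupply.vdc.get()       # measured output voltage
v_set = powersupply.setpoint.get()   # programmed setpoint, if the driver defines it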
The implementation details of the instrument drivers, and how they are mixed with the interface and feature classes, are described in the fluidlab.instruments module documentation.
pymanip.instruments defines shortcut classes, as well as an asynchronous extension of the fluidlab.instruments classes.
Shortcuts to FluidLab Instrument classes
The instrument classes are the basic objects that we use to communicate with various scientific instruments. They are implemented in the FluidLab project, in the fluidlab.instruments module.
The pymanip.instruments module, along with the list_instruments CLI command, are simple tools designed to simplify access to these classes.
Indeed, each instrument class in FluidLab is defined in a separate sub-module, which can result in long and convoluted import statements, such as
from fluidlab.instruments.chiller.lauda import Lauda
from fluidlab.instruments.multiplexer.agilent_34970a import Agilent34970a
from fluidlab.instruments.motor_controller.newport_xps_rl import NewportXpsRL
The pymanip.instruments module simplifies these import statements and allows one to write
from pymanip.instruments import Lauda, Agilent34970a, NewportXpsRL
This is of course less efficient, because it means that all instrument classes are actually loaded, but it makes the script easier to write and read.
The names of the available instrument classes can be conveniently obtained from the command line
$ python -m pymanip list_instruments
Instrument classes
Instruments module (pymanip.instruments)
This module auto-imports all the instrument classes from fluidlab.instruments:
- Driver for Lock-in Amplifier Stanford 830.
- Driver for the multiplexer Agilent 34970A.
- Driver for the multiplexer Keithley 2700 Series.
- Driver for the Lakeshore Model 224 Temperature Monitor.
- Driver for the multiplexer Keithley 705.
- Driver for the multimeter HP 34401a.
- Driver for the sourcemeter Keithley 2400.
- Driver for the power supply Agilent 6030a.
- Driver for the power supply IPS 2303S.
- Driver for the power supply TDK Lambda.
- Driver for the power supply TTI CPX400DP Dual 420 watt DC Power Supply.
- A driver for the power supply HP_6653A.
- Driver for the power supply Xantrex XDC300.
- A driver for the function generator Agilent 33220A.
- Minimal implementation of a driver for the Agilent 33500B.
- Driver for the function generator Hewlett-Packard 33120A.
- A driver for the function generator Tektronix AFG 3022 B.
- A driver for the function generator Thurlby Thandar Instruments (TTI) TSX3510P.
- A driver for the ultra-low distortion wave generator DS360.
- Driver for the oscilloscope Agilent DSOX2014a.
Asynchronous Extension of FluidLab Instrument Classes
FluidLab Instrument classes, which can be accessed from the pymanip.instruments module, are written in a synchronous manner, i.e. all calls are blocking. However, most of the time, the program essentially waits for the instrument response.
While most scientific instruments are not designed for concurrent requests, and most libraries, such as National Instruments VISA, are not thread-safe, it is still desirable to have an asynchronous API for the instrument classes. This allows other tasks to keep running while waiting for (possibly long) delays. The other tasks include communication with other types of devices on other types of boards, refreshing the GUI, and responding to remote access requests.
This asynchronous extension is implemented in the pymanip.aioinstruments and pymanip.interfaces.aiointer modules. They define subclasses of the FluidLab interface, feature and instrument classes. This is currently highly experimental, so please use it at your own risk. However, it may eventually be merged into FluidLab when it gets more mature.
From the user perspective, the usage of asynchronous instruments is relatively straightforward for those who already know how to use FluidLab Instrument classes.
The synchronous program
from pymanip.instruments import Agilent34970a

def main():
    with Agilent34970a('GPIB0::9::INSTR') as a:
        R1, R2 = a.ohm_4w.get((101, 102))
    return R1, R2
becomes
from pymanip.aioinstruments import AsyncAgilent34970a

async def main():
    async with AsyncAgilent34970a('GPIB0::9::INSTR') as a:
        R1, R2 = await a.ohm_4w.aget((101, 102))
    return R1, R2
The asynchronous instrument subclasses all have the “Async” prefix in their names. The asynchronous context manager must be used instead of the classical context manager, because some specific initialisation may be done in the asynchronous interface __aenter__() method. All the features have the same names as in the synchronous class, but they have aget() and aset() co-routine methods instead of get() and set() methods.
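The benefit appears when several instruments are queried concurrently, because the waiting times overlap. A sketch using asyncio.gather (the AsyncHP34401a multimeter and its address are assumptions for the example):

import asyncio
from pymanip.aioinstruments import AsyncAgilent34970a, AsyncHP34401a

async def main():
    async with AsyncAgilent34970a('GPIB0::9::INSTR') as mux, \
               AsyncHP34401a('GPIB0::22::INSTR') as dmm:
        # both queries run concurrently: the total time is roughly
        # that of the slower instrument, not the sum of the two
        (R1, R2), vdc = await asyncio.gather(
            mux.ohm_4w.aget((101, 102)),
            dmm.vdc.aget(),
        )
    return R1, R2, vdc

results = asyncio.run(main())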
Asynchronous instruments implementation
Asynchronous extension of fluidlab.interfaces.QueryInterface (pymanip.interfaces.aiointer)
This module defines AsyncQueryInterface as the default subclass of fluidlab.interfaces.QueryInterface.
The default implementation simply runs the methods of QueryInterface in an executor, i.e. in a separate thread. Because the parent class is probably not thread-safe, each call is protected by a lock to prevent concurrent calls to the instrument. However, the main thread is released and other tasks can run on other instruments.
The methods that are defined in this way are __aenter__(), __aexit__(), _aread(), _awrite(), _aquery() and await_for_srq(). The higher-level co-routine methods aread(), awrite() and aquery() simply check that the interface is opened, and then await the low-level method, in a similar fashion as in the QueryInterface class.
Therefore, concrete subclasses such as pymanip.interfaces.aioserial.AsyncSerialInterface or pymanip.interfaces.aiovisa.AsyncVISAInterface only have to override the low-level co-routine methods (if necessary).
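The underlying pattern is the usual one for wrapping a blocking, non-thread-safe call in asyncio: run it in an executor while holding a per-interface lock. A minimal, self-contained sketch of the idea (not the actual pymanip code):

import asyncio
from functools import partial

class BlockingInterface:
    """Stand-in for a synchronous, non-thread-safe query interface."""
    def query(self, command):
        # imagine a slow instrument query here
        return f"response to {command}"

class AsyncWrapper:
    def __init__(self, interface):
        self.interface = interface
        self.lock = asyncio.Lock()  # serialises access to the instrument

    async def aquery(self, command):
        loop = asyncio.get_event_loop()
        async with self.lock:
            # the blocking call runs in a worker thread, so other tasks
            # (other instruments, GUI, web server) keep running meanwhile
            return await loop.run_in_executor(
                None, partial(self.interface.query, command)
            )

async def demo():
    wrapper = AsyncWrapper(BlockingInterface())
    print(await wrapper.aquery("*IDN?"))

asyncio.run(demo())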
- class pymanip.interfaces.aiointer.AsyncQueryInterface[source]
This class represents an asynchronous Query interface. It is a subclass of the synchronous QueryInterface defined in FluidLab. The input parameters are those of QueryInterface.
Concrete subclasses may replace the lock attribute by a global board lock if necessary.
- async _aquery(command, time_delay=0.1, **kwargs)[source]
Low-level co-routine to write/read a query. This method does not check whether the interface is opened. There are two cases:
- async _aread(*args, **kwargs)[source]
Low-level co-routine to read data from the instrument. This method does not check whether the interface is opened. In this basic class, it simply acquires the interface lock and runs the
_read()
method in an executor.
- async _awrite(*args, **kwargs)[source]
Low-level co-routine to send data to the instrument. This method does not check whether the interface is opened. In this basic class, it simply acquires the interface lock and runs the
_write()
method in an executor.
- async aquery(command, time_delay=0.1, **kwargs)[source]
This co-routine method queries the instrument. The parameters are identical to those of the
query()
method.
Asynchronous extension of SerialInterface (pymanip.interfaces.aioserial)
This module defines AsyncSerialInterface as a subclass of fluidlab.interfaces.serial_inter.SerialInterface and pymanip.interfaces.aiointer.AsyncQueryInterface.
- class pymanip.interfaces.aioserial.AsyncSerialInterface(port, baudrate=9600, bytesize=8, parity='N', stopbits=1, timeout=1, xonxoff=False, rtscts=False, dsrdtr=False, eol=None, multilines=False, autoremove_eol=False)[source]
This class is an asynchronous extension of
fluidlab.interfaces.serial_inter.SerialInterface
. It inherits all its methods from the parent classes.
Asynchronous extension of fluidlab.interfaces.VISAInterface (pymanip.interfaces.aiovisa)
This module defines AsyncVISAInterface as a subclass of fluidlab.interfaces.visa_inter.VISAInterface and pymanip.interfaces.aiointer.AsyncQueryInterface.
- class pymanip.interfaces.aiovisa.AsyncVISAInterface(resource_name, backend=None)[source]
This class is an asynchronous extension of fluidlab.interfaces.visa_inter.VISAInterface. The parameters are the same as those of the fluidlab.interfaces.visa_inter.VISAInterface class.
Asynchronous instruments module (pymanip.aioinstruments)
This module auto-imports all the asynchronous instrument classes.
Asynchronous Instrument drivers (pymanip.aioinstruments.aiodrivers)
This module defines a subclass of fluidlab.instruments.drivers.Driver where the QueryInterface attribute is replaced by the corresponding AsyncQueryInterface instance. The asynchronous context manager is bound to the interface's asynchronous context manager.
- pymanip.aioinstruments.aiodrivers.interface_from_string(name, default_physical_interface=None, **kwargs)[source]
This function is similar to fluidlab.interfaces.interface_from_string() except that it returns an instance of AsyncQueryInterface instead of QueryInterface.
- class pymanip.aioinstruments.aiodrivers.AsyncDriver(interface=None)[source]
This class is an asynchronous extension of
fluidlab.instruments.drivers.Driver
.
Asynchronous Instrument features (pymanip.aioinstruments.aiofeatures)
Asynchronous extension of the fluidlab instrument features. The main difference is that they define aget() and aset() co-routine methods. The original get() and set() methods are not overridden, and may still be used.
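Synchronous and asynchronous access can therefore be mixed on the same feature, for example (inside the asynchronous context manager shown earlier):

R1, R2 = await a.ohm_4w.aget((101, 102))  # co-routine access, does not block the event loop
R1, R2 = a.ohm_4w.get((101, 102))         # the original blocking call still works, but blocks the loop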
- class pymanip.aioinstruments.aiofeatures.AsyncQueryCommand(name, doc='', command_str='', parse_result=None)[source]
- class pymanip.aioinstruments.aiofeatures.AsyncValue(name, doc='', command_set=None, command_get=None, check_instrument_value=True, pause_instrument=0.0, channel_argument=False)[source]
- class pymanip.aioinstruments.aiofeatures.AsyncNumberValue(name, doc='', command_set=None, command_get=None, limits=None, check_instrument_value=True, pause_instrument=0.0, channel_argument=False)[source]
- class pymanip.aioinstruments.aiofeatures.AsyncFloatValue(name, doc='', command_set=None, command_get=None, limits=None, check_instrument_value=True, pause_instrument=0.0, channel_argument=False)[source]
- class pymanip.aioinstruments.aiofeatures.AsyncBoolValue(name, doc='', command_set=None, command_get=None, check_instrument_value=True, pause_instrument=0.0, channel_argument=False, true_string='1', false_string='0')[source]
Asynchronous IEC60488 instrument driver (pymanip.aioinstruments.aioiec60488)
This module defines an asynchronous subclass of fluidlab.instruments.iec60488.IEC60488.
- class pymanip.aioinstruments.aioiec60488.AsyncIEC60488(interface=None)[source]
- aclear_status = <function AsyncWriteCommand._build_driver_class.<locals>.func>
- aquery_esr = <function AsyncQueryCommand._build_driver_class.<locals>.func>
- aquery_stb = <function AsyncQueryCommand._build_driver_class.<locals>.func>
- aquery_identification = <function AsyncQueryCommand._build_driver_class.<locals>.func>
- areset_device = <function AsyncWriteCommand._build_driver_class.<locals>.func>
- aperform_internal_test = <function AsyncQueryCommand._build_driver_class.<locals>.func>
- await_till_completion_of_operations = <function AsyncWriteCommand._build_driver_class.<locals>.func>
- aget_operation_complete_flag = <function AsyncQueryCommand._build_driver_class.<locals>.func>
- await_continue = <function AsyncQueryCommand._build_driver_class.<locals>.func>
- async aclear_status()
Clears the data status structure
- async aget_operation_complete_flag()
Get operation complete flag
- async aperform_internal_test()
Perform internal self-test
- async aquery_esr()
Query the event status register
- async aquery_identification()
Identification query
- async aquery_stb()
Query the status register
- async areset_device()
Perform a device reset
- async await_continue()
Wait to continue
- async await_till_completion_of_operations()
Return “1” when all operations are completed
- event_status_enable_register = <pymanip.aioinstruments.aiofeatures.AsyncRegisterValue object>
- status_enable_register = <pymanip.aioinstruments.aiofeatures.AsyncRegisterValue object>
Video acquisition
The pymanip.video module provides tools to help with camera acquisition. We use the third-party pymba module (bindings to the AVT Vimba SDK) for AVT cameras, the third-party pyAndorNeo module (bindings to the Andor SDK3 library) for the Andor camera, the third-party pyueye module (bindings to the IDS ueye library), the Python module provided by Ximea for the Ximea cameras, and the PyVCAM wrapper for Photometrics cameras.
We wrote our own bindings to the Pixelfly library for the PCO camera. Beware that the code works for us, but there is no guarantee that it will work with your camera models.
The idea was for us to be able to switch cameras without having to change much of the acquisition code. So we define an abstract pymanip.video.Camera base class, and all concrete sub-classes follow the exact same user API. The methods allow video acquisition to be started in a manner consistent with our needs, and also provide a unified live preview API.
It also makes it relatively straightforward to do simultaneous acquisition on several
cameras, even if they are totally different models and brands and use different
underlying libraries.
The useful concrete classes are given in this table:
| Camera type | Concrete class |
|---|---|
| AVT | pymanip.video.avt.AVT_Camera |
| PCO | pymanip.video.pco.PCO_Camera |
| Andor | |
| IDS | |
| Ximea | pymanip.video.ximea.Ximea_Camera |
| Photometrics | |
They are all sub-classes of the pymanip.video.Camera abstract base class. Most of the useful user-level documentation lies in the base class. Indeed, all the concrete implementations share the same API, so their internal methods are implementation details.
In addition, a high-level class, pymanip.video.session.VideoSession, is provided to build simple video acquisition scripts, with possibly several concurrent cameras triggered by a function generator.
Simple usage
Context manager
First of all, the Camera object uses a context manager to ensure proper opening and closing of the camera connection. This is true for all the methods, synchronous or asynchronous.
Therefore, all our examples will have a block like this:
from pymanip.video.avt import AVT_Camera as Camera
with Camera() as cam:
    # ... do something with the camera ...
And in all cases, switching to another camera, for example to a PCO camera, only requires changing the import statement, e.g.
from pymanip.video.pco import PCO_Camera as Camera
with Camera() as cam:
    # ... do something with the camera ...
Simple high-level acquisition function
The easiest way to do an image acquisition with pymanip is to use the high-level acquire_to_files() method. It is a one-liner that will start the camera, acquire the desired number of frames, and save them to disk. It is enough for very simple acquisition programs.
Parameters of the acquisition can be set with dedicated methods beforehand, such as
set_exposure_time()
set_trigger_mode()
set_roi()
set_frame_rate()
The advantage over direct calls to modules like pymba or AndorNeo is that it is straightforward to switch cameras without changing the user-level acquisition code.
A simple acquisition script, for use with an external GBF clock, would be:
import numpy as np

from pymanip.video.avt import AVT_Camera as Camera

acquisition_name = "essai_1"
nframes = 3000

with Camera() as cam:
    cam.set_trigger_mode(True)  # set external trigger
    count, dt = cam.acquire_to_files(
        nframes,
        f"{acquisition_name:}/img",
        dryrun=False,
        file_format="png",
        compression_level=9,
        delay_save=True,
    )

# dt is assumed to hold the frame timestamps returned by acquire_to_files
dt_avg = np.mean(dt[1:] - dt[:-1])
print("Average:", 1.0 / dt_avg, "fps")
The returned image is an instance of MetadataArray, which is an extension of numpy.ndarray with an additional metadata attribute.
When possible, the Camera concrete subclasses set this metadata attribute with two key-value pairs:
- “timestamp”;
- “counter”.
The “timestamp” key is the frame timestamp in camera clock time. The “counter” key is the frame number.
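Once a frame has been obtained (as in the acquisition loop of the next section), the metadata can be read directly from it; a short sketch (the keys are only present when the camera back-end provides them):

frame_time = frame.metadata["timestamp"]   # frame timestamp, in camera clock time
frame_number = frame.metadata["counter"]   # frame number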
Generator method
It is sometimes desirable to have more control over what to do with the frames. In this case, we can use the acquisition() generator method. The parameters are similar to those of the acquire_to_files() method, except that the frames are yielded by the generator, and the user is responsible for the processing and saving.
The previous example can be rewritten like this:
import numpy as np
import cv2

from pymanip.video.avt import AVT_Camera as Camera

acquisition_name = "essai_1"
nframes = 3000
compression_level = 9
params = (cv2.IMWRITE_PNG_COMPRESSION, compression_level)

t = np.zeros((nframes,))
with Camera() as cam:
    for i, frame in enumerate(cam.acquisition(nframes)):
        filename = f"{acquisition_name:}/img-{i:04d}.png"
        cv2.imwrite(filename, frame, params)
        t[i] = frame.metadata["timestamp"].timestamp()

dt_avg = np.mean(t[1:] - t[:-1])
print("Average:", 1.0 / dt_avg, "fps")
Of course, the advantage of the generator method is more apparent when you want to do more than what acquire_to_files() does.
Asynchronous acquisition
Simple asynchronous acquisition
The video recording may be used as part of a larger experimental program. In particular, you may want to keep monitoring the experiment conditions while recording the video, and possibly save the experiment parameters next to the video frames. The simplest way to achieve that is to implement the monitoring task and the video recording task as asynchronous functions.
The very simple acquire_to_files_async() method is sufficient for very basic cases. Its usage is strictly similar to that of the synchronous acquire_to_files() method described in the previous section. In fact, the synchronous method is only a wrapper around the asynchronous method.
The simple example of the previous section can be rewritten like this:
import asyncio

import numpy as np

from pymanip.video.avt import AVT_Camera as Camera

async def some_monitoring():
    ...  # do some monitoring of the experiment conditions

async def video_recording():
    acquisition_name = "essai_1"
    nframes = 3000

    with Camera() as cam:
        cam.set_trigger_mode(True)
        count, dt = await cam.acquire_to_files_async(
            nframes,
            f"{acquisition_name:}/img",
            dryrun=False,
            file_format="png",
            compression_level=9,
            delay_save=True,
        )

    # dt is assumed to hold the frame timestamps returned by acquire_to_files_async
    dt_avg = np.mean(dt[1:] - dt[:-1])
    print("Average:", 1.0 / dt_avg, "fps")

async def main():
    await asyncio.gather(some_monitoring(),
                         video_recording(),
                         )

asyncio.run(main())
Multi-camera acquisition
One application of this simple method is to extend it to simultaneous acquisition on several cameras (possibly of different brands). To ensure simultaneous frame grabbing, it is necessary to use an external function generator for external triggering. In the following example, we use an Agilent 33220a function generator, which we configure for a burst with a software trigger. In our case, we use USB to communicate with the generator. Once the two cameras are ready for frame grabbing, the software trigger is sent, and the frames from both cameras are acquired.
Some manual tinkering may be necessary in real cases. For example, here we use two FireWire AVT cameras connected to the same FireWire board, so we must adjust the packet size.
To know when both cameras are ready to grab frames, we use the initialising_cams parameter. Each camera object removes itself from this set when it is ready to grab frames. All we need to do is therefore to implement a task which sends the software trigger to the function generator once the set is empty.
Because the function generator is programmed for a burst of N pulses, the cameras will only be triggered N times. It is therefore a way to make sure that the N obtained frames were indeed simultaneous. If one camera skips a frame, the total number of frames will no longer be N.
The code is as follows:
import asyncio
from datetime import datetime

from pymanip.instruments import Agilent33220a
from pymanip.video.avt import AVT_Camera

basename = "multi"
fps = 10.0
nframes = 100

with Agilent33220a("USB0::2391::1031::MY44052515::INSTR") as gbf:
    gbf.configure_burst(fps, nframes)

    async def start_clock(cams):
        """This asynchronous function sends the software trigger to
        the gbf when all cams are ready.

        :param cams: initialising cams
        :type cams: set
        :return: timestamp of the time when the software trigger was sent
        :rtype: float
        """
        while len(cams) > 0:
            await asyncio.sleep(1e-3)
        gbf.trigger()
        return datetime.now().timestamp()

    with AVT_Camera(0) as cam0, \
         AVT_Camera(1) as cam1:

        cam0.set_trigger_mode(True)  # external trigger
        cam1.set_trigger_mode(True)
        cam0.set_exposure_time(10e-3)
        cam1.set_exposure_time(10e-3)

        cam0.camera.IIDCPacketSizeAuto = "Off"
        cam0.camera.IIDCPacketSize = 5720
        cam1.camera.IIDCPacketSizeAuto = "Off"
        cam1.camera.IIDCPacketSize = 8192 // 2

        initialing_cams = {cam0, cam1}

        task0 = cam0.acquire_to_files_async(
            nframes,
            basename + "/cam0",
            zerofill=4,
            file_format="png",
            delay_save=True,
            progressbar=True,
            initialising_cams=initialing_cams,
        )
        task1 = cam1.acquire_to_files_async(
            nframes,
            basename + "/cam1",
            zerofill=4,
            file_format="png",
            delay_save=True,
            progressbar=False,
            initialising_cams=initialing_cams,
        )
        task2 = start_clock(initialing_cams)

        tasks = asyncio.gather(task0, task1, task2)
        loop = asyncio.get_event_loop()
        (countA, dtA), (countB, dtB), t0 = loop.run_until_complete(tasks)
Note that we use the progressbar parameter to avoid printing two progress bars. The acquire_to_files_async() methods are passed the number of expected frames. If one frame is skipped, a Timeout exception will be raised.
Advanced usage
In this section, we illustrate a more advanced usage from one of our own use cases. We need simultaneous acquisition on two cameras. The framerate is too fast to wait for each frame to be saved before grabbing the next one. But we don’t want to wait until the end of the acquisition (which might still be long) to start saving, because we don’t want to lose all the data in case something bad happens, and we wish to be able to have a look at the pictures before the acquisition ends.
So, in this example, we implement simple queues in which the frames are stored, and there is a fourth task which gets the frames from these queues and saves them (at a lower rate than the acquisition rate). When the acquisition is stopped, this last task finishes the saving.
In addition, we want to save acquisition parameters with an
AsyncSession
object.
To summarize the four tasks:
task0: acquisition on cam 0
task1: acquisition on cam 1
task2: software trigger when cams are ready
task3: background saving of images
The first three tasks are similar to those in the previous section. We use the Python standard library SimpleQueue class to implement the frame queues.
We comment the various parts of this script in the following subsections.
Preamble
First, the import statements and the definition of some global parameters (video parameters, as well as names for output files).
from queue import SimpleQueue
import asyncio
import os
import cv2
from datetime import datetime
from pymanip.instruments import Agilent33220a
from pymanip.video.avt import AVT_Camera
from pymanip.asyncsession import AsyncSession
from progressbar import ProgressBar

# User inputs
compression_level = 3
exposure_time = 10e-3
cam_type = "avt"
fps = 2
num_imgs = 10

# Paths to save data
current_date = datetime.today().strftime("%Y%m%d")
current_dir = os.path.dirname(os.path.realpath(__file__))
saving_dir_date = f"{current_dir}\\data\\{current_date}\\"
if not os.path.isdir(saving_dir_date):
    os.makedirs(saving_dir_date)
num_dir = len(os.listdir(saving_dir_date))
saving_dir_run = f"{saving_dir_date}run{num_dir+1:02.0f}\\"
if not os.path.isdir(saving_dir_run):
    os.makedirs(saving_dir_run)
basename = f"{saving_dir_run}session"
Acquisition task (task 0 and task 1)
This task is responsible for grabbing frames from the camera and putting them in the queue. Note that we must copy the image, because the numpy array yielded by the acquisition_async() generator uses shared memory (and would no longer hold this particular frame on subsequent iterations of the generator). The queues are created in the main function, and are called im_buffer0 for camera 0, and im_buffer1 for camera 1.
In addition, we want to be able to abort the script. We use the mechanisms defined in the pymanip.asyncsession.AsyncSession class, which sets up signal handling for the interrupt signal. It basically defines a running attribute that is set to False when the program should stop. The acquisition task must check this variable to cleanly stop grabbing frames if the user has sent the interrupt signal.
async def acquire_images(sesn, cam, num_cam, initialing_cams):
    global num_imgs

    kk = 0
    bar = ProgressBar(min_value=0, max_value=num_imgs, initial_value=kk)
    gen = cam.acquisition_async(num_imgs, initialising_cams=initialing_cams)
    async for im in gen:
        if num_cam == 0:
            sesn.im_buffer0.put(im.copy())
        elif num_cam == 1:
            sesn.im_buffer1.put(im.copy())
        kk += 1
        bar.update(kk)
        if not sesn.running:
            num_imgs = kk
            success = await gen.asend(True)
            if not success:
                print("Unable to stop camera acquisition")
            break
    bar.finish()
    print(f"Camera acquisition stopped ({kk:d} images recorded).")
    sesn.running = False
Software trigger task (task 2)
This task monitors the set of initialising cams, which becomes empty when all the cameras are ready to grab frames. Then, it triggers the function generator.
async def start_clock(sesn, cams):
    # Start the clock once all cameras are done initializing
    while len(cams) > 0:
        await asyncio.sleep(1e-3)
    sesn.gbf.trigger()  # the function generator is stored on the session in main()
    return datetime.now().timestamp()
Background saving of images (task 3)
This task looks for images in the im_buffer0 and im_buffer1 queues, as long as the acquisition is still running or the queues are not empty.
The images are saved using the OpenCV imwrite() function, which we run in an executor (i.e. in a separate thread), so as not to block the acquisition tasks.
async def save_images(sesn):
    params = (cv2.IMWRITE_PNG_COMPRESSION, compression_level)
    loop = asyncio.get_event_loop()
    i = 0
    bar = None
    while sesn.running or not sesn.im_buffer0.empty() or not sesn.im_buffer1.empty():
        if sesn.im_buffer0.empty() and sesn.im_buffer1.empty():
            await asyncio.sleep(1.0)
        else:
            if not sesn.im_buffer0.empty():
                im0 = sesn.im_buffer0.get()
                filename0 = f"{saving_dir_run}\\cam0_{i:04d}.png"
                await loop.run_in_executor(None, cv2.imwrite, filename0, im0, params)
                i += 1
            if not sesn.im_buffer1.empty():
                im1 = sesn.im_buffer1.get()
                filename1 = f"{saving_dir_run}\\cam1_{i:04d}.png"
                await loop.run_in_executor(None, cv2.imwrite, filename1, im1, params)
        if not sesn.running:
            if bar is None:
                print("Saving is terminating...")
                bar = ProgressBar(
                    min_value=0, max_value=2 * num_imgs, initial_value=2 * i
                )
            else:
                bar.update(2 * i)
    if bar is not None:
        bar.finish()
    print(f"{2*i:d} images saved.")
One important point is that this task is the only task which accesses disk storage. The acquisition tasks work solely in memory, so they are not slowed down by the saving task.
Main function and setup
The main function sets up the function generator and the cameras, and starts the tasks. It must also create the queues.
async def main():
    with AsyncSession(basename) as sesn, \
         Agilent33220a("USB0::2391::1031::MY44052515::INSTR") as sesn.gbf:

        # Configure function generator
        sesn.gbf.configure_burst(fps, num_imgs)
        sesn.save_parameter(fps=fps, num_imgs=num_imgs)

        # Prepare buffer queues
        sesn.im_buffer0 = SimpleQueue()
        sesn.im_buffer1 = SimpleQueue()

        # Prepare cameras and start tasks
        with AVT_Camera(0) as cam0, \
             AVT_Camera(1) as cam1:

            # External trigger and camera properties
            cam0.set_trigger_mode(True)
            cam1.set_trigger_mode(True)
            cam0.set_exposure_time(exposure_time)
            cam1.set_exposure_time(exposure_time)
            sesn.save_parameter(exposure_time=exposure_time)
            cam0.camera.IIDCPacketSizeAuto = "Off"
            cam0.camera.IIDCPacketSize = 5720
            cam1.camera.IIDCPacketSizeAuto = "Off"
            cam1.camera.IIDCPacketSize = 8192 // 2

            # Set up tasks
            initialing_cams = {cam0, cam1}
            task0 = acquire_images(sesn, cam0, 0, initialing_cams)
            task1 = acquire_images(sesn, cam1, 1, initialing_cams)
            task2 = start_clock(sesn, initialing_cams)
            task3 = save_images(sesn)

            # We use the AsyncSession monitor co-routine which sets up the signal
            # handling. We don't need remote access, so server_port=None.
            # Alternative:
            # await asyncio.gather(task0, task1, task2, task3)
            await sesn.monitor(task0, task1, task2, task3,
                               server_port=None)

asyncio.run(main())
Asynchronous acquisition session (pymanip.video.session)
This module provides a high-level class, VideoSession, to be used in acquisition scripts.
Users should subclass VideoSession and add two methods: prepare_camera(self, cam), in which additional camera setup may be done, and process_image(self, img), in which possible image post-processing can be done.
The prepare_camera method, if defined, is called before starting camera acquisition. The process_image method, if defined, is called before saving the image.
The implementation allows simultaneous acquisition of several cameras, triggered by a function generator.
Concurrent tasks are set up to grab images from the cameras to RAM, while a background task saves the images to the disk. The cameras, and the trigger, are released as soon as acquisition is finished, even if the images are still being written to the disk, which allows several scripts to be executed concurrently (if the host computer has enough RAM). The trigger_gbf object must implement configure_square() and configure_burst() methods to configure square waves and bursts, as well as a trigger() method for the software trigger. An example is fluidlab.instruments.funcgen.agilent_33220a.Agilent33220a.
A context manager must be used to ensure proper saving of the metadata to the database.
Example
import numpy as np
import cv2

from pymanip.video.session import VideoSession
from pymanip.video.ximea import Ximea_Camera
from pymanip.instruments import Agilent33220a


class TLCSession(VideoSession):

    def prepare_camera(self, cam):
        # Set up camera (keep values as custom instance attributes)
        self.exposure_time = 50e-3
        self.decimation_factor = 2
        cam.set_exposure_time(self.exposure_time)
        cam.set_auto_white_balance(False)
        cam.set_limit_bandwidth(False)
        cam.set_vertical_skipping(self.decimation_factor)
        cam.set_roi(1298, 1833, 2961, 2304)

        # Save some metadata to the AsyncSession underlying database
        self.save_parameter(
            exposure_time=self.exposure_time,
            decimation_factor=self.decimation_factor,
        )

    def process_image(self, img):
        # Manual decimation in the horizontal direction
        img = img[:, ::self.decimation_factor, :]

        # Optionally halve the image size again
        # image_size = (img.shape[0]//2, img.shape[1]//2)
        # img = cv2.resize(img, image_size)

        # Fixed white balance correction
        kR = 1.75
        kG = 1.0
        kB = 2.25
        b, g, r = cv2.split(img)
        img = np.array(
            cv2.merge([kB*b, kG*g, kR*r]),
            dtype=img.dtype,
        )

        # 180° rotation
        img = cv2.rotate(img, cv2.ROTATE_180)

        return img


with TLCSession(
    Ximea_Camera(),
    trigger_gbf=Agilent33220a("USB0::0x0957::0x0407::SG43000299::INSTR"),
    framerate=24,
    nframes=1000,
    output_format="png",
) as sesn:

    # ROI helper (remove cam.set_roi in prepare_camera for correct usage)
    # sesn.roi_finder()

    # Single picture test
    # sesn.show_one_image()

    # Live preview
    # ts, count = sesn.live()

    # Run actual acquisition
    ts, count = sesn.run(additionnal_trig=1)
- class pymanip.video.session.VideoSession(camera_or_camera_list, trigger_gbf, framerate, nframes, output_format, output_format_params=None, output_path=None, exist_ok=False, timeout=None, burst_mode=True)[source]
Bases: AsyncSession
This class represents a video acquisition session.
- Parameters
camera_or_camera_list (Camera or list of Camera) – camera(s) to be acquired
trigger_gbf (Driver) – function generator to be used as trigger
framerate (float) – desired framerate
nframes (int) – desired number of frames
output_format (str) – desired output image format, “bmp”, “png”, “tif”, or video format “mp4”
output_format_params (list) – additional params to be passed to cv2.imwrite()
exist_ok (bool) – allows overwriting an existing output folder
- async _acquire_images(cam_no, live=False)[source]
Private instance method: image acquisition task. This task asynchronously iterates over the given camera frames, and puts the obtained images in a simple FIFO queue.
- _convert_for_ffmpeg(cam_no, img, fmin, fmax, gain)[source]
Private instance method: image conversion for ffmpeg process. This method prepares the input image to bytes to be sent to the ffmpeg pipe.
- async _fast_acquisition_to_ram(cam_no, total_timeout_s)[source]
Private instance method: fast acquisition to ram task
- async _live_preview(unprocessed=False)[source]
Private instance method: live preview task. This task checks the FIFO queue, drains it and shows the last frame of each camera using cv2. If more than one frame is in the queues, the older frames are dropped.
- Parameters
unprocessed (bool) – do not call
process_image()
method.
- async _save_images(keep_in_RAM=False, unprocessed=False, no_save=False)[source]
Private instance method: image saving task. This task checks the image FIFO queue. If an image is available, it is taken out of the queue and saved to the disk.
- async _save_video(cam_no, gain=1.0, unprocessed=False)[source]
Private instance method: video saving task. This task waits for images in the FIFO queue, and sends them to ffmpeg via a pipe.
- async _start_clock()[source]
Private instance method: clock starting task. This task waits for all the cameras to be ready for trigger, and then sends a software trig to the function generator.
- get_one_image(additionnal_trig=0, unprocessed=False, unpack_solo_cam=True)[source]
Get one image from the camera(s).
- Parameters
- Returns
image(s) from the camera(s)
- Return type
numpy.ndarray or list of numpy.ndarray (if multiple cameras, or unpack_solo_cam=False).
- async main(keep_in_RAM=False, additionnal_trig=0, live=False, unprocessed=False, delay_save=False, no_save=False)[source]
Main entry point for acquisition tasks. This asynchronous task can be called with asyncio.run(), or combined with other user-defined tasks.
- Parameters
- Returns
camera_timestamps, camera_counter
- Return type
numpy.ndarray, numpy.ndarray
- roi_finder(additionnal_trig=0)[source]
Helper to determine the ROI. This method grabs one unprocessed image from the camera(s), and allows interactive selection of the region of interest. Attention: it is assumed that the prepare_camera() method did not already set the ROI.
Implementation
Video acquisition module (pymanip.video)
This module defines the Camera
abstract base class,
which implements common methods such as the live video preview, and higher
level simple methods to quickly set up a video recording. It also
defines common useful functions, and a simple extension of Numpy arrays to
hold metadata (such as frame timestamp).
- class pymanip.video.Camera[source]
This class is the abstract base class for all other concrete camera classes. The concrete sub-classes must implement the following methods (a minimal subclass sketch is given after these lists):
- acquisition_oneshot() method
- acquisition() and acquisition_async() generator methods
- resolution, name and bitdepth properties
The concrete sub-classes will also probably have to override the constructor method, and the enter/exit context manager methods, as well as common property getters and setters:
- set_exposure_time()
- set_trigger_mode()
- set_roi()
- set_frame_rate()
It may also define specialized setters for the cameras which support them:
- set_adc_operating_mode(): ADC operating mode
- set_pixel_rate(): pixel rate sensor readout (in Hz)
- set_delay_exposuretime()
- acquire_signalHandler(*args, **kwargs)[source]
This method sends a stop signal to the
acquire_to_files_async()
method.
- acquire_to_files(*args, **kwargs)[source]
This method starts the camera, acquires images and saves them to the disk. It is a simple wrapper around the
pymanip.video.Camera.acquire_to_files_async()
asynchronous method. The parameters are identical.
- async acquire_to_files_async(num, basename, zerofill=4, dryrun=False, file_format='png', compression=None, compression_level=3, verbose=True, delay_save=False, progressbar=True, initialising_cams=None, **kwargs)[source]
This asynchronous method starts the camera, acquires num images and saves them to the disk. It is a simple quick way to perform camera acquisition (one-liner in the user code).
- Parameters
num (int) – number of frames to acquire
basename (str) – basename for image filenames to be saved on disk
zerofill (int, optional) – number of digits for the framenumber for image filename, defaults to 4
dryrun (bool, optional) – do the acquisition, but saves nothing (testing purposes), defaults to False
file_format (str, optional) – format for the image files, defaults to “png”. Possible values are “raw”, “npy”, “npy.gz”, “hdf5”, “png” or any other extension supported by OpenCV imwrite.
compression (str, optional) – compression option for HDF5 format (“gzip”, “lzf”), defaults to None.
compression_level (int, optional) – png compression level for PNG format, defaults to 3.
verbose (bool, optional) – prints information message, defaults to True.
delay_save (bool, optional) – records all the frames in RAM, and saves them at the end. This is useful for fast framerates when saving time is too slow. Defaults to False.
progressbar (bool, optional) – use the progressbar module to show a progress bar. Defaults to True.
initialising_cams (set, optional) – None, or set of camera objects. This camera object will remove itself from this set, once it is ready to grab frames. Useful in the case of multi camera acquisitions, to determine when all cameras are ready to grab frames. Defaults to None.
- Returns
image_counter, frame_datetime
- Return type
The details of the file formats are given in this table:

file_format | description
---|---
raw | native 16-bit integers, i.e. li16 (little-endian) on Intel CPUs
npy | numpy npy file (warning: depends on pickle format)
npy.gz | gzip compressed numpy file
hdf5 | hdf5, with optional compression
png, jpg, tif | image format with opencv imwrite with optional compression level for PNG
Typical usage of the function for one camera:
async def main():
    with Camera() as cam:
        counts, times = await cam.acquire_to_files_async(num=20, basename='img-')

asyncio.run(main())
- acquisition(num=inf, timeout=1000, raw=False, initialising_cams=None, raise_on_timeout=True)[source]
This generator method is the main method that sub-classes must implement, along with the asynchronous variant. It is used by all the other higher-level methods, and can also be used directly in user code.
- Parameters
num (int, or float("inf"), optional) – number of frames to acquire, defaults to float(“inf”).
timeout – timeout for frame acquisition (in milliseconds)
raw (bool, optional) – if True, returns bytes from the camera without any conversion. Defaults to False.
initialising_cams (set, optional) – None, or set of camera objects. This camera object will remove itself from this set, once it is ready to grab frames. Useful in the case of multi camera acquisitions, to determine when all cameras are ready to grab frames. Defaults to None.
raise_on_timeout (bool, optional) – boolean indicating whether to actually raise an exception when timeout occurs
It starts the camera, yields num images, and closes the camera. It can be aborted when sent an object with a true truth value. It then cleanly stops the camera and finally yields True as a confirmation that the stop_signal has been caught before returning. Sub-classes must therefore read the possible stop_signal when yielding the frame, and act accordingly.
The MetadataArray objects yielded by this generator use a shared memory buffer which may be overwritten for the next frame, and which is no longer defined when the generator object is cleaned up. The users are responsible for copying the array, if they want a persistent copy.
User-level code will use the generator in this manner:
gen = cam.acquire()
for frame in gen:
    # .. do something with frame ..
    if I_want_to_stop:
        clean = gen.send(True)
        if not clean:
            print('Warning generator not cleaned')
        # no need to break here because the gen will be automatically exhausted
- async acquisition_async(num=inf, timeout=1000, raw=False, initialising_cams=None, raise_on_timeout=True)[source]
This asynchronous generator method is similar to the acquisition() generator method, except asynchronous. So much so, that in the general case, the latter can be defined simply by yielding from this asynchronous generator (so that the code is written once for both use cases), i.e.

from pymanip.asynctools import synchronize_generator

def acquisition(
    self,
    num=np.inf,
    timeout=1000,
    raw=False,
    initialising_cams=None,
    raise_on_timeout=True,
):
    yield from synchronize_generator(
        self.acquisition_async,
        num,
        timeout,
        raw,
        initialising_cams,
        raise_on_timeout,
    )
It starts the camera, yields num images, and closes the camera. It can stop yielding images when sent an object with a true truth value. It then cleanly stops the camera and finally yields True as a confirmation that the stop_signal has been caught before returning. Sub-classes must therefore read the possible stop_signal when yielding the frame, and act accordingly.
The MetadataArray objects yielded by this generator use a shared memory buffer which may be overwritten for the next frame, and which is no longer defined when the generator object is cleaned up. The users are responsible for copying the array, if they want a persistent copy.
The user API is similar, except with asynchronous calls, i.e.
gen = cam.acquire_async()
async for frame in gen:
    # .. do something with frame ..
    if I_want_to_stop:
        clean = await gen.asend(True)
        if not clean:
            print('Warning generator not cleaned')
        # no need to break here because the gen will be automatically exhausted
- acquisition_oneshot()[source]
This method must be implemented in the sub-classes. It starts the camera, grabs one frame, stops the camera, and returns the frame. It is useful for testing purposes, or in cases where only one frame is desired between very long time delays. It takes no input parameters. Returns an “autonomous” array (the buffer is independent of the camera object).
- Returns
frame
- Return type
- display_crosshair()[source]
This method adds a centered crosshair to the live-preview window, e.g. to check camera alignment (qt backend only).
- preview(backend='cv', slice_=None, zoom=0.5, rotate=0)[source]
This method starts and synchronously runs the live-preview GUI.
- Parameters
backend (str) – GUI library to use. Possible values: “cv” for OpenCV GUI, “qt” for PyQtGraph GUI.
slice_ (Iterable[int], optional) – coordinates of the region of interest to show, defaults to None
zoom (float, optional) – zoom factor, defaults to 0.5
rotate (float, optional) – image rotation angle, defaults to 0
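As a quick illustration, the live preview can be run in a few lines with any concrete camera class (here the AVT camera documented below; the camera number is an example):

from pymanip.video.avt import AVT_Camera

with AVT_Camera(0) as cam:
    cam.preview(backend="cv", zoom=0.5)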
- async preview_async_cv(slice_, zoom, name, rotate=0)[source]
This method starts and asynchronously runs the live-preview with OpenCV GUI. The params are identical to the
preview()
method.
- preview_cv(slice_, zoom, rotate=0)[source]
This method starts and synchronously runs the live-preview with OpenCV GUI. It is a wrapper around the
pymanip.video.Camera.preview_async_cv()
method. The params are identical to thepreview()
method.
- class pymanip.video.MetadataArray(input_array, metadata=None)[source]
Bases: ndarray
This class extends Numpy arrays to allow for an additional metadata attribute.
- metadata
dictionary attribute containing user-defined key-value pairs
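A minimal sketch of how the metadata attribute can be used (the key names are arbitrary):

import numpy as np

from pymanip.video import MetadataArray

frame = MetadataArray(np.zeros((480, 640), dtype=np.uint16),
                      metadata={"timestamp": 0.0, "counter": 1})
print(frame.metadata["counter"], frame.shape)  # otherwise behaves like a regular ndarray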
- pymanip.video.save_image(im, ii, basename, zerofill, file_format, compression, compression_level, color_order=None)[source]
This function is a simple general function to save an input image from the camera to disk.
- Parameters
im (MetadataArray) – input image
ii (int) – frame number
basename (str) – file basename
zerofill (int) – number of digits for the frame number
file_format (str) – image file format on disk. Possible values are: “raw”, “npy”, “npy.gz”, “hdf5”, “png”, or a file extension that OpenCV imwrite supports
compression (str) – the compression argument “gzip” or “lzf” to pass to h5py.create_dataset() if file_format is “hdf5”
compression_level (int) – the png compression level passed to opencv for the “png” file format
Concrete implementation for Andor camera
Andor module (pymanip.video.andor)
This module is a shortcut for the pymanip.video.andor.camera.Andor_Camera
class.
Andor Camera module (pymanip.video.andor.camera)
This module implements the Andor_Camera
class, as a subclass
of the pymanip.video.Camera
base class. It uses the
third-party pyAndorNeo
module.
- class pymanip.video.andor.camera.Andor_Camera(camNum=0)[source]
Concrete
pymanip.video.Camera
class for Andor camera.- Parameters
camNum (int, optional) – camera number, defaults to 0.
- async acquisition_async(num=inf, timeout=None, raw=False, initialising_cams=None, raise_on_timeout=True)[source]
Concrete implementation of
pymanip.video.Camera.acquisition_async()
for the Andor camera.
- acquisition_oneshot(timeout=1.0)[source]
Concrete implementation of
pymanip.video.Camera.acquisition_oneshot()
for the Andor camera.
Andor reader module (pymanip.video.andor.reader)
This module implements simple pure-python reader for Andor DAT and SIF files.
- class pymanip.video.andor.reader.AndorAcquisitionReader(acquisition_folder)[source]
This class is a simple pure-python reader for Andor DAT spool files in a directory.
- Parameters
acquisition_folder (str) – the folder in which to read the DAT spool files
Concrete implementation for AVT camera
AVT Camera module (pymanip.video.avt)
This module implements the pymanip.video.avt.AVT_Camera
class
using the third-party pymba
module.
- class pymanip.video.avt.AVT_Camera(cam_num=0, pixelFormat='Mono16')[source]
Bases:
Camera
Concrete
pymanip.video.Camera
class for AVT camera.- Parameters
- acquisition(num=inf, timeout=1000, raw=False, framerate=None, external_trigger=False, initialising_cams=None, raise_on_timeout=True)[source]
Concrete implementation of
pymanip.video.Camera.acquisition()
for the AVT camera.
- async acquisition_async(num=inf, timeout=1000, raw=False, framerate=None, external_trigger=False, initialising_cams=None, raise_on_timeout=True)[source]
Concrete implementation of
pymanip.video.Camera.acquisition_async()
for the AVT camera.
- acquisition_oneshot()[source]
Concrete implementation of
pymanip.video.Camera.acquisition_oneshot()
for the AVT camera.
- camera_feature_info(featureName)[source]
This method queries the camera for the specified feature.
- Parameters
featureName (str) – one of the features returned by
camera_features()
- Returns
values associated with the specified feature
- Return type
- camera_features()[source]
This method returns the list of possible camera features.
- Returns
camera feature
- Return type
- classmethod get_camera_list()[source]
This method returns the list of camera numbers connected to the computer.
- Returns
camera ids
- Return type
- set_exposure_time(seconds)[source]
This method sets the exposure time for the camera.
- Parameters
seconds (float) – exposure in seconds. Possible values range from 33.0 µs to 67108895.0 µs.
Concrete implementation for PCO camera
PCO module (pymanip.video.pco)
This module is a shortcut for the pymanip.video.pco.camera.PCO_Camera
class. It also defines utility functions for PCO camera.
PixelFly library bindings (pymanip.video.pco.pixelfly)
This module implements bindings to the PCO PixelFly library using ctypes
.
Please note that these bindings are not official, and that not all PixelFly functions are wrapped. Please refer to the official PCO PixelFly documentation for an accurate description of the functions.
- class pymanip.video.pco.pixelfly.PCO_Image[source]
This class is a binding to the PCO_Image C Structure.
- pymanip.video.pco.pixelfly.PCO_manage_error(ret_code)[source]
This function raises an error exception or a runtime warning if ret_code is non-zero.
- Parameters
ret_code (int) – PCO library function return code
- pymanip.video.pco.pixelfly.bcd_byte_to_str(input_)[source]
This function converts a one-byte BCD value into a two-digit string.
- pymanip.video.pco.pixelfly.bcd_to_int(input_, endianess='little')[source]
This function converts a binary-coded decimal (BCD) value into an int.
The decimal-encoded (BCD) digit format is given in this table:

Decimal digit | Bits
---|---
0 | 0000
1 | 0001
2 | 0010
3 | 0011
4 | 0100
5 | 0101
6 | 0110
7 | 0111
8 | 1000
9 | 1001
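As an illustration of the encoding above, here is a small, self-contained sketch of BCD decoding (this is not the library implementation, just the idea):

def bcd_byte_to_digits(byte_value):
    # Each byte packs two decimal digits: high nibble and low nibble.
    high = (byte_value >> 4) & 0x0F
    low = byte_value & 0x0F
    return high, low

# 0x42 encodes the decimal number 42
assert bcd_byte_to_digits(0x42) == (4, 2)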
- pymanip.video.pco.pixelfly.PCO_OpenCamera()[source]
This function opens a camera device and attaches it to a handle, which will be returned by the parameter ph. This function scans for the next available camera. If you want to access a distinct camera please use PCO_OpenCameraEx. Due to historical reasons the wCamNum parameter is a don’t care.
- pymanip.video.pco.pixelfly.PCO_OpenCameraEx(interface_type, camera_number)[source]
This function opens a distinct camera, e.g. a camera which is connected to a specific interface port.
- Parameters
The interface_type values are given in this table:

Interface | interface_type
---|---
FireWire | 1
Camera Link Matrox | 2
Camera Link Silicon Software mE III | 3
Camera Link National Instruments | 4
GigE | 5
USB 2.0 | 6
Camera Link Silicon Software mE IV | 7
USB 3.0 | 8
Reserved for WLAN | 9
Camera Link serial port only | 10
Camera Link HS | 11
all | 0xFFFF
- pymanip.video.pco.pixelfly.PCO_CloseCamera(handle)[source]
This function closes a camera device.
- Parameters
handle (HANDLE) – handle of the camera
- pymanip.video.pco.pixelfly.PCO_GetInfoString(handle)[source]
This function reads information about the camera, e.g. firmware versions.
- Parameters
handle (HANDLE) – camera handle
- pymanip.video.pco.pixelfly.PCO_GetROI(handle: int) Tuple[int, int, int, int] [source]
This function returns the current ROI (region of interest) setting in pixels. (X0,Y0) is the upper left corner and (X1,Y1) the lower right one.
- pymanip.video.pco.pixelfly.PCO_SetROI(handle: int, RoiX0: c_ushort, RoiY0: c_ushort, RoiX1: c_ushort, RoiY1: c_ushort)[source]
This function sets a ROI (region of interest) area on the sensor of the camera.
- pymanip.video.pco.pixelfly.PCO_GetFrameRate(handle)[source]
This function returns the current frame rate and exposure time settings of the camera. Returned values are only valid if the last timing command was PCO_SetFrameRate.
- pymanip.video.pco.pixelfly.PCO_SetFrameRate(handle: int, FrameRateMode: c_ushort, FrameRate: c_ulong, FrameRateExposure: c_ulong)[source]
This function sets the frame rate (in mHz) and exposure time (in ns). The frame rate status gives the limiting factors if the conditions are not met.
- pymanip.video.pco.pixelfly.PCO_GetCameraName(handle)[source]
This function retrieves the name of the camera.
- pymanip.video.pco.pixelfly.PCO_GetGeneral(handle)[source]
This function requests all info contained in the following descriptions, especially:
- camera type, hardware/firmware version, serial number, etc.
- the current camera and power supply temperatures
- pymanip.video.pco.pixelfly.PCO_GetSensorStruct(handle)[source]
Get the complete set of the sensor functions settings
- pymanip.video.pco.pixelfly.PCO_GetCameraDescription(handle)[source]
Sensor and camera specific description is queried. In the returned PCO_Description structure margins for all sensor related settings and bitfields for available options of the camera are given.
- pymanip.video.pco.pixelfly.PCO_GetCameraHealthStatus(handle)[source]
This function retrieves information about the current camera status.
- pymanip.video.pco.pixelfly.PCO_GetRecordingStruct(handle)[source]
Get the complete set of the recording function settings. Please fill in all wSize parameters, even in embedded structures.
- pymanip.video.pco.pixelfly.PCO_GetSizes(handle: int) Tuple[int, int, int, int] [source]
This function returns the current armed image size of the camera.
- pymanip.video.pco.pixelfly.PCO_AllocateBuffer(handle: int, bufNr: int, size: int, bufPtr: LP_c_ushort, hEvent: c_void_p = 0) Tuple[int, LP_c_ushort, int] [source]
This function sets up a buffer context to receive the transferred images. A buffer index is returned, which must be used for the image transfer functions. There is a maximum of 16 buffers per camera.
Attention: This function cannot be used, if the connection to the camera is established through the serial connection of a Camera Link grabber. In this case, the SDK of the grabber must be used to do any buffer management.
- pymanip.video.pco.pixelfly.PCO_FreeBuffer(handle, bufNr)[source]
This function frees a previously allocated buffer context with a given index. If internal memory was allocated for this buffer context, it will be freed. If an internal event handle was created, it will be closed.
- pymanip.video.pco.pixelfly.PCO_GetBufferStatus(handle, sBufNr)[source]
This function queries the status of the buffer context with the given index:
- StatusDll describes the state of the buffer context:

StatusDll | description
---|---
0x80000000 | Buffer is allocated
0x40000000 | Buffer event created inside the SDK DLL
0x20000000 | Buffer is allocated externally
0x00008000 | Buffer event is set

- StatusDrv describes the state of the last image transfer into this buffer:
  - PCO_NOERROR: Image transfer succeeded
  - others: see error codes
- pymanip.video.pco.pixelfly.PCO_ArmCamera(handle)[source]
Arms, i.e. prepares the camera for a consecutive set recording status = [run] command. All configurations and settings made up to this moment are accepted and the internal settings of the camera are prepared. Thus the camera is able to start immediately when the set recording status = [run] command is performed.
- pymanip.video.pco.pixelfly.PCO_GetRecordingState(handle)[source]
Returns the current recording state of the camera:
- 0x0000: camera is stopped, recording state [stop]
- 0x0001: camera is running, recording state [run]
- pymanip.video.pco.pixelfly.PCO_SetRecordingState(handle, state)[source]
Sets the current recording status and waits till the status is valid. If the state can’t be set the function will return an error.
Note
it is necessary to arm the camera before every set recording status command in order to ensure that all settings are accepted correctly.
During the recording session, it is possible to change the timing by calling
PCO_SetDelayExposureTime()
.
- pymanip.video.pco.pixelfly.PCO_GetBitAlignment(handle)[source]
This function returns the current bit alignment of the transferred image data. The data can be either MSB (Big Endian) or LSB (Little Endian) aligned. Returns:
- 0x0000 [MSB]
- 0x0001 [LSB]
- pymanip.video.pco.pixelfly.PCO_SetBitAlignment(handle, littleEndian)[source]
This function sets the bit alignment of the transferred image data. littleEndian can be 0 or 1.
- pymanip.video.pco.pixelfly.PCO_GetImageStruct(handle)[source]
Information about previously recorded images is queried from the camera and the variables of the PCO_Image structure are filled with this information.
- pymanip.video.pco.pixelfly.PCO_GetMetaData(handle, bufNr)[source]
Cameras: pco.dimax and pco.edge
Query additional image information, which the camera has attached to the transferred image, if Meta Data mode is enabled.
- pymanip.video.pco.pixelfly.PCO_SetMetaDataMode(handle, MetaDataMode)[source]
Cameras: pco.dimax and pco.edge
Sets the mode for Meta Data and returns information about size and version of the Meta Data block. When Meta Data mode is set to [on], a Meta Data block with additional information is added at the end of each image. The internal buffers allocated with PCO_AllocateBuffer are adapted automatically.
- pymanip.video.pco.pixelfly.PCO_GetMetaDataMode(handle)[source]
Returns the current Meta Data mode of the camera and information about size and version of the Meta Data block.
- pymanip.video.pco.pixelfly.PCO_SetTimestampMode(handle, mode)[source]
Sets the timestamp mode of the camera:

mode | short description | long description
---|---|---
0x0000 | [off] | 
0x0001 | [binary] | BCD coded timestamp in the first 14 pixels
0x0002 | [binary+ASCII] | BCD coded timestamp in the first 14 pixels + ASCII text
0x0003 | [ASCII] | ASCII text only (see camera descriptor for availability)
- pymanip.video.pco.pixelfly.PCO_AddBufferEx(handle, dw1stImage, dwLastImage, sBufNr, wXRes, wYRes, wBitPerPixel)[source]
This function sets up a request for a single transfer from the camera and returns immediately.
- pymanip.video.pco.pixelfly.PCO_CancelImages(handle)[source]
This function removes all remaining buffers from the internal queue, resets the internal queue and also resets the transfer state machine in the camera. It is mandatory to call PCO_CancelImages after all image transfers are done. This function can be called before or after setting PCO_SetRecordingState to [stop].
- pymanip.video.pco.pixelfly.PCO_SetImageParameters(handle, XRes, YRes, flags)[source]
This function sets the image parameters for internally allocated resources. This function must be called before an image transfer is started. If the next image will be transferred from a recording camera, the flag IMAGEPARAMETERS_READ_WHILE_RECORDING must be set. If the next action is to read out images from the camera internal memory, the flag IMAGEPARAMETERS_READ_FROM_SEGMENTS must be set.
- pymanip.video.pco.pixelfly.PCO_GetImageEx(handle, segment, firstImage, lastImage, bufNr, xRes, yRes, bitsPerPixel)[source]
This function can be used to get a single image from the camera. The function does not return until the image is transferred to the buffer or an error occurred. The timeout value for the transfer can be set with the function PCO_SetTimeouts; the default value is 6 seconds. On return, the image is stored in the memory area of the buffer, which is addressed through the parameter sBufNr.
- pymanip.video.pco.pixelfly.PCO_SetDelayExposureTime(handle, dwDelay, dwExposure, wTimeBaseDelay, wTimeBaseExposure)[source]
This function sets the delay and exposure time and the associated time base values. Restrictions for the parameter values are defined in the PCO_Description structure:
- dwMinDelayDESC
- dwMaxDelayDESC
- dwMinDelayStepDESC
- dwMinExposDESC
- dwMaxExposDESC
- dwMinExposStepDESC

Possible values for wTimeBaseDelay and wTimeBaseExposure:

Value | Unit
---|---
0x0000 | ns
0x0001 | µs
0x0002 | ms
- pymanip.video.pco.pixelfly.PCO_GetDelayExposureTime(handle)[source]
Returns the current setting of delay and exposure time
- pymanip.video.pco.pixelfly.PCO_GetTriggerMode(handle)[source]
Returns the current trigger mode setting of the camera
- pymanip.video.pco.pixelfly.PCO_SetTriggerMode(handle, mode)[source]
Sets the trigger mode of the camera.
- pymanip.video.pco.pixelfly.PCO_SetADCOperation(handle, operation)[source]
Sets the ADC (analog-digital-converter) operating mode. If sensor data is read out using single ADC operation, linearity of image data is enhanced. Using dual ADC operation, readout is faster and allows higher frame rates. If dual ADC operating mode is set, the horizontal ROI must be adapted to symmetrical values.
Possible values:

Value | Mode
---|---
0x0001 | [single ADC]
0x0002 | [dual ADC]
- pymanip.video.pco.pixelfly.PCO_GetADCOperation(handle)[source]
Returns the ADC operation mode (single / dual)
- pymanip.video.pco.pixelfly.PCO_SetPixelRate(handle, rate)[source]
This function sets the pixel rate for the sensor readout.
- pymanip.video.pco.pixelfly.PCO_GetPixelRate(handle)[source]
Returns the current pixel rate of the camera in Hz. The pixel rate determines the sensor readout speed.
- pymanip.video.pco.pixelfly.PCO_GetNoiseFilterMode(handle)[source]
This function returns the current operating mode of the image correction in the camera.
The noise filter mode:

Value | Mode
---|---
0x0000 | [off]
0x0001 | [on]
0x0101 | [on + hot pixel correction]
- pymanip.video.pco.pixelfly.PCO_SetNoiseFilterMode(handle, mode)[source]
This function sets the image correction operating mode of the camera. Image correction can either be switched totally off, to noise filter only mode, or to noise filter plus hot pixel correction mode. The command will be rejected if Recording State is [run], see PCO_GetRecordingState.
The noise filter mode:

Value | Mode
---|---
0x0000 | [off]
0x0001 | [on]
0x0101 | [on + hot pixel correction]
- pymanip.video.pco.pixelfly.PCO_TriggerModeDescription
dictionary of trigger modes
PCO Camera module (pymanip.video.pco.camera)
This module implements the pymanip.video.pco.PCO_Camera
class using
bindings to the Pixelfly library from pymanip.video.pco.pixelfly
.
- class pymanip.video.pco.camera.PCO_Camera(interface='all', camera_num=0, *, metadata_mode=False, timestamp_mode=True)[source]
Concrete
pymanip.video.Camera
class for PCO camera.- Parameters
interface (str, optional) – interface where to look for the camera, defaults to “all”
camera_num (int, optional) – camera number to look for, defaults to 0.
metadata_mode (bool, optional) – enable PCO Metadata mode, defaults to False.
timestamp_mode (bool, optional) – enable Timestamp mode (supported by all cameras), defaults to True.
- async acquisition_async(num=inf, timeout=None, raw=False, initialising_cams=None, raise_on_timeout=True)[source]
Concrete implementation of
pymanip.video.Camera.acquisition_async()
for the PCO camera.
- acquisition_oneshot()[source]
Concrete implementation of
pymanip.video.Camera.acquisition_oneshot()
for the PCO camera.
- property bitdepth
Camera sensor bit depth
- current_adc_operation()[source]
This method returns the current ADC operation mode.
- Returns
Current ADC operation mode (0x0001 for “single”, 0x0002 for “dual”)
- Return type
- current_delay_exposure_time()[source]
This method returns current delay and exposure time in seconds.
- current_frame_rate()[source]
This method returns the current frame rate.
- Returns
Current frame rate
- Return type
- current_noise_filter_mode()[source]
This method queries the current noise filter mode.
- Returns
the noise filter mode
- Return type
The noise filter mode:

Value | Mode
---|---
0x0000 | [off]
0x0001 | [on]
0x0101 | [on + hot pixel correction]
- current_pixel_rate()[source]
This method returns the current pixel rate.
- Returns
Current pixel rate (e.g. 10 MHz or 40 MHz for the PCO.1600)
- Return type
- current_trigger_mode_description()[source]
This method returns the current trigger mode description.
- Returns
description of current trigger mode
- Return type
- health_status()[source]
This method queries the camera for its health status.
- Returns
warn, err, status
- property name
Camera name
- property resolution
Camera maximum resolution
- set_adc_operating_mode(mode)[source]
This function selects single or dual ADC operating mode:
Single mode increases linearity;
Dual mode allows higher frame rates.
- set_delay_exposuretime(delay=None, exposuretime=None)[source]
This method sets both the delay and the exposure time.
- set_frame_rate(Frameratemode, Framerate, Framerateexposure)[source]
This method sets Frame rate (mHz) and exposure time (ns).
- Parameters
- Returns
message, framerate, exposure time
- Return type
The meaning of the framerate mode is given in this table:

Framerate mode | Meaning
---|---
0x0000 | Auto mode (camera decides which parameter will be trimmed)
0x0001 | Frame rate has priority (exposure time will be trimmed)
0x0002 | Exposure time has priority (frame rate will be trimmed)
0x0003 | Strict, function shall return with error if values are not possible

The message value in return gives the limiting factors when the conditions are not fulfilled. The meaning is given in this table:

Message | Meaning
---|---
0x0000 | Settings consistent, all conditions met
0x0001 | Frame rate trimmed, frame rate was limited by readout time
0x0002 | Frame rate trimmed, frame rate was limited by exposure time
0x0004 | Exposure time trimmed, exposure time cut to frame time
0x8000 | Return values dwFrameRate and dwFrameRateExposure are not yet validated

In the case where message 0x8000 is returned, the other returned values are simply the parameter values passed to the function.
- set_noise_filter_mode(mode)[source]
This method sets the image correction operating mode of the camera. Image correction can either be switched totally off, to noise filter only mode, or to noise filter plus hot pixel correction mode. The command will be rejected if Recording State is [run], see PCO_GetRecordingState.
- Parameters
mode (int) – the noise filter mode
The noise filter mode:

Value | Mode
---|---
0x0000 | [off]
0x0001 | [on]
0x0101 | [on + hot pixel correction]
- set_pixel_rate(rate)[source]
This function selects the pixel rate for sensor readout.
- Parameters
rate (float) – readout rate (in Hz)
For the PCO.1600: 10 MHz or 40 MHz
- set_roi(roiX0=0, roiY0=0, roiX1=0, roiY1=0)[source]
This method sets the positions of the upper left corner (X0,Y0) and lower right (X1,Y1) corner of the ROI (region of interest) in pixels.
- Parameters
The minimum ROI is \(64\times 16\) pixels, and it is required that \(roiX1 \geq roiX0\) and \(roiY1 \geq roiY0\).
- set_trigger_mode(mode)[source]
This method sets the trigger mode for the camera.
- Parameters
mode (str) – one of PCO_TriggerModeDescription
Possible values are:

mode | description
---|---
0x0000 | auto sequence
0x0001 | software trigger
0x0002 | external exposure start & software trigger
0x0003 | external exposure control
0x0004 | external synchronized
0x0005 | fast external exposure control
0x0006 | external CDS control
0x0007 | slow external exposure control
0x0102 | external synchronized HDSDI
- pymanip.video.pco.camera.PCO_get_binary_timestamp(image)[source]
This function reads the BCD coded timestamp in the first 14 pixels of an image from a PCO camera.
- Parameters
image (array) – the PCO camera image buffer
- Returns
counter, timestamp
- Return type
int, datetime
We assume the following format (per PCO documentation):

Pixel | Description | Range
---|---|---
Pixel 1 | Image counter (MSB) | 00..99
Pixel 2 | Image counter | 00..99
Pixel 3 | Image counter | 00..99
Pixel 4 | Image counter (LSB) | 00..99
Pixel 5 | Year (MSB) | 20
Pixel 6 | Year (LSB) | 03..99
Pixel 7 | Month | 01..12
Pixel 8 | Day | 01..31
Pixel 9 | Hour | 00..23
Pixel 10 | Minutes | 00..59
Pixel 11 | Seconds | 00..59
Pixel 12 | µs * 10000 | 00..99
Pixel 13 | µs * 100 | 00..99
Pixel 14 | µs | 00..90
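To make the table above concrete, here is a small, self-contained sketch of how such a BCD timestamp could be decoded from the first 14 pixel values (illustrative only, not the library's implementation; it assumes each pixel value directly holds one BCD-coded byte):

import datetime


def decode_pco_timestamp(pixels):
    """Decode counter and datetime from 14 BCD-coded pixel values (illustration)."""

    def bcd(value):
        return 10 * ((value >> 4) & 0x0F) + (value & 0x0F)

    digits = [bcd(p) for p in pixels[:14]]
    counter = int("".join(f"{d:02d}" for d in digits[:4]))
    year = 100 * digits[4] + digits[5]
    microseconds = digits[11] * 10000 + digits[12] * 100 + digits[13]
    timestamp = datetime.datetime(
        year, digits[6], digits[7],        # year, month, day
        digits[8], digits[9], digits[10],  # hour, minutes, seconds
        microseconds,
    )
    return counter, timestamp


# Example: counter 1, 2023-05-17 12:34:56.789000
pixels = [0x00, 0x00, 0x00, 0x01, 0x20, 0x23, 0x05, 0x17,
          0x12, 0x34, 0x56, 0x78, 0x90, 0x00]
print(decode_pco_timestamp(pixels))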
- class pymanip.video.pco.camera.PCO_Buffer(cam_handle, XResAct, YResAct)[source]
This class represents an allocated buffer for the PCO camera. It implements a context manager, as well as utility functions to convert to bytes and numpy array. The buffer is allocated in the constructor method, and freed either by the context manager exit method, or by manually calling the free() method.
- Parameters
- as_array()[source]
This method returns the buffer as a numpy array. No data is copied, the memory is still bound to this buffer. The user must copy the data if necessary.
- Returns
image array
- Return type
numpy.ndarray
Concrete implementation for IDS camera
IDS Camera module (pymanip.video.ids)
This module implements the pymanip.video.ids.IDS_Camera
class
using the third-party pyueye
module.
- class pymanip.video.ids.IDS_Camera(cam_num=0)[source]
Bases:
Camera
Concrete implementation for IDS Camera.
- async acquisition_async(num=inf, timeout=1000, raw=False, initialising_cams=None, raise_on_timeout=True)[source]
Concrete implementation
- acquisition_oneshot(timeout_ms=1000)[source]
This method must be implemented in the sub-classes. It starts the camera, grab one frame, stops the camera, and returns the frame. It is useful for testing purposes, or in cases where only one frame is desired between very long time delays. It takes no input parameters. Returns an “autonomous” array (the buffer is independant of the camera object).
- Returns
frame
- Return type
Concrete implementation for Ximea camera
Xiseq file parser (pymanip.video.ximea.xiseq)
Ximea Camera module (pymanip.video.ximea.camera)
This module implements the pymanip.video.ximea.Ximea_Camera
using the
Python module provided by Ximea.
- class pymanip.video.ximea.camera.Ximea_Camera(serial_number=None, pixelFormat=None)[source]
Bases:
Camera
Concrete
pymanip.video.Camera
class for Ximea camera.- async acquisition_async(num=inf, timeout=None, raw=False, initialising_cams=None, raise_on_timeout=True, wait_before_stopping=None)[source]
Concrete implementation of pymanip.video.Camera.acquisition_async() for the Ximea camera. Timeout in milliseconds.
- Parameters
raw (bool (optional)) – if True, returns the XI_IMG object instead of a numpy array.
wait_before_stopping (async function (optional)) – async function to be awaited before stopping the acquisition_async
- async get_image(loop, image, timeout=5000)[source]
Asynchronous version of the xiapi.Camera.get_image method. This function awaits the next image to be available in the transport buffer. Color images are in BGR order (similar to OpenCV default). Attention: matplotlib expects RGB order.
- Parameters
image (
xiapi.Image
) – Image instance to copy image totimeout (int) – timeout in milliseconds
- property name
Camera name
- set_auto_white_balance(toggle)[source]
Enable/Disable auto white balance
- Parameters
toggle (bool) – True if auto white balance
- set_exposure_time(seconds)[source]
This method sets the exposure time for the camera.
- Parameters
seconds (float) – exposure in seconds.
- set_limit_bandwidth(limit)[source]
Enable bandwidth limiting (useful if several cameras are acquiring simultaneously).
- set_roi(roiX0=0, roiY0=0, roiX1=0, roiY1=0)[source]
This method sets the positions of the upper left corner (X0,Y0) and lower right (X1,Y1) corner of the ROI (region of interest) in pixels.
Concrete implementation for Photometrics camera
Photometrics Camera module (pymanip.video.photometrics.camera)
This module implements the pymanip.video.photometrics.Photometrics_Camera
using the
Python wrapper provided by Photometrics.
The documentation for the PVCAM SDK is available online.
- class pymanip.video.photometrics.camera.Photometrics_Camera(cam_num=0, readout_port=0)[source]
Bases:
Camera
Concrete
pymanip.video.Camera
class for Photometrics camera.- async acquisition_async(num=inf, timeout=None, raw=False, initialising_cams=None, raise_on_timeout=True)[source]
Concrete implementation of pymanip.video.Camera.acquisition_async() for the Photometrics camera. Timeout in milliseconds.
- async fast_acquisition_to_ram(num, total_timeout_s=300, initialising_cams=None, raise_on_timeout=True)[source]
Fast method (without the overhead of run_in_executor and the asynchronous generator) for acquisitions where concurrent saving is not an option (because the framerate is much faster than the writing time), so all frames are kept in RAM anyway.
- set_exposure_time(seconds)[source]
This method sets the exposure time for the camera.
- Parameters
seconds (float) – exposure in seconds.
Acquisition cards
Synchronous acquisition functions
This section describes the synchronous functions implemented in pymanip
for
signal acquisition on DAQmx and Scope boards.
New scripts should preferably use the newer pymanip.aiodaq
instead.
DAQmx acquisition module (pymanip.daq.DAQmx)
The fluidlab.daq.daqmx
module is a simple functional front-end to
the third-party PyDAQmx
module. It mainly provides two simple one-liner functions:
The pymanip.daq.DAQmx
module is essentially based on its fluidlab counterpart, with
other choices for the default values of arguments, and an additional autoset feature for
the read_analog()
function.
It also adds a convenience function for printing the list of DAQmx devices, used by the
pymanip CLI interface.
A discovery function is also added, print_connected_devices()
, based
on the dedicated DAQDevice
class, which is used by the
list_daq sub-command on pymanip command line.
- class pymanip.daq.DAQmx.DAQDevice(device_name)[source]
This class represents a DAQmx device.
- Parameters
device_name (str) – name of the DAQmx device, e.g. “Dev1”
It mostly implements a number of property getters, which are wrappers to the PyDAQmx low-level functions. In addition, it has a static method, list_connected_devices(), to discover currently connected devices.
- property ai_chans
List of the analog input channels on the device
- property ao_chans
List of the analog output channels on the device
- property bus_type
Bus type connection to the device
- property di_lines
List of digital input lines on the device
- property di_ports
List of digital input ports on the device
- property do_lines
List of digital output lines on the device
- property do_ports
List of digital output ports on the device
- static list_connected_devices()[source]
This static method discovers the connected devices.
- Returns
connected devices
- Return type
list of
pymanip.daq.DAQmx.DAQDevice
objects
- property location
Description of the location (PCI bus and number, or PXI chassis and slot)
- property pci_busnum
PCI Bus number
- property pci_devnum
PCI Device number
- property product_category
Device product category (str)
- property product_num
Device product num
- property product_type
Device product type
- property pxi_chassisnum
PXI Chassis number
- property pxi_slotnum
PXI Slot number
- pymanip.daq.DAQmx.print_connected_devices()[source]
This function prints the list of connected DAQmx devices.
- pymanip.daq.DAQmx.read_analog(resource_names, terminal_config, volt_min=None, volt_max=None, samples_per_chan=1, sample_rate=1, coupling_types='DC', output_filename=None, verbose=True)[source]
This function reads signal from analog input.
- Parameters
resource_names – names from MAX (e.g. “Dev1/ai0”)
samples_per_chan (int) – Number of samples to be read per channel
sample_rate (float) – Clock frequency
coupling_types (str, or list) – coupling of the channels (“DC”, “AC”, “GND”)
output_filename (str, optional) – If not None, file to write the acquired data
verbose (bool, optional) – Verbosity level. Defaults to True (unlike in Fluidlab)
If the channel range is not specified, a 5.0 second sample will first be acquired to determine the appropriate channel range (autoset feature).
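For illustration, a typical call might look like the following sketch (the device and channel names, and the “Diff” terminal configuration string, are assumptions to be adapted to the actual setup):

from pymanip.daq.DAQmx import read_analog

# Acquire 10 seconds of data at 1 kHz on two channels (example resource names)
data = read_analog(
    ["Dev1/ai0", "Dev1/ai1"],
    terminal_config="Diff",
    volt_min=-5.0,
    volt_max=5.0,
    samples_per_chan=10_000,
    sample_rate=1000.0,
    coupling_types="DC",
)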
Scope acquisition module (pymanip.daq.Scope)
This module implements a read_analog()
similar to that of the
DAQmx
module, but for Scope devices.
It uses the niScope
module from National Instruments.
- pymanip.daq.Scope.read_analog(scope_name, channelList='0', volt_range=10.0, samples_per_chan=100, sample_rate=1000.0, coupling_type='DC')[source]
This function reads a signal from a digital oscilloscope.
- Parameters
scope_name – name of the NI-Scope device (e.g. ‘Dev3’)
channelList (str) – comma-separated string of channel number (e.g. “0”)
volt_range (float) – voltage range
samples_per_chan (int) – number of samples to read per channel
sample_rate (float) – sampling rate; for the 5922, allowed values are 60e6/n with n between 4 and 1200
coupling_type (str) – ‘DC’, ‘AC’, ‘GND’
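A minimal usage sketch for the Scope variant (the device name is an example):

from pymanip.daq.Scope import read_analog

# Read 1000 samples at 1 kHz on channel 0 of a NI-Scope device
data = read_analog(
    "Dev3",
    channelList="0",
    volt_range=10.0,
    samples_per_chan=1000,
    sample_rate=1000.0,
    coupling_type="DC",
)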
Asynchronous acquisition
The pymanip.aiodaq module implements acquisition cards in a similar manner as pymanip.video for cameras. The acquisition systems (currently DAQmx or Scope) are represented with a single object-oriented interface, which makes it easy to switch between different cards, and possibly to acquire on several systems concurrently.
In addition, it provides a full GUI with both oscilloscope and signal analyser tools. This oscilloscope GUI can be invoked directly from the command line (see Oscilloscope):
$ python -m pymanip oscillo
Like the other sub-modules in pymanip
, it is built with python standard asyncio
module,
so it can be easily mixed up with pymanip.asyncsession
, pymanip.aioinstruments
and
pymanip.video
.
Usage
To use the module, simply instantiate one of the concrete classes, DAQSystem or ScopeSystem, and use its context manager. Then, the configuration is done with methods such as add_channel() and configure_clock(), and reading is initiated and performed with the start(), read() and stop() methods.
Example with a Scope device:
import asyncio

from pymanip.aiodaq import TerminalConfig
from pymanip.aiodaq.scope import ScopeSystem, possible_sample_rates


async def main():
    with ScopeSystem('Dev3') as scope:
        scope.add_channel('0', TerminalConfig.RSE,
                          voltage_range=10.0)
        scope.configure_clock(sample_rate=min(possible_sample_rates),
                              samples_per_chan=1024)
        scope.start()
        d = await scope.read(tmo=1.0)
        await scope.stop()
        return d

asyncio.run(main())
Implementation of asynchronous acquisition
Asynchronous Acquisition Card (pymanip.aiodaq)
This module defines an abstract base class for asynchronous communication with acquisition cards. This is used by the live oscilloscope command line tool.
Concrete implementations are:
- pymanip.aiodaq.daqmx.DAQmxSystem for NI DAQmx cards;
- pymanip.aiodaq.scope.ScopeSystem for NI Scope cards.
In principle, other library bindings could be implemented.
- class pymanip.aiodaq.AcquisitionCard[source]
Base class for all acquisition cards. The constructor takes no argument. Channels are added using the add_channel() method, and the clock is configured with the configure_clock() method.
- add_channel(channel_name, terminal_config, voltage_range)[source]
This method adds a channel for acquisition.
- Parameters
channel_name (str) – the channel to add, e.g. “Dev1/ai0”
terminal_config (TerminalConfig) – the configuration of the terminal, i.e. RSE, NRSE, DIFFERENTIAL or PSEUDODIFFERENTIAL
voltage_range (float) – the voltage range for the channel (actual value may differ)
- configure_clock(sample_rate, samples_per_chan)[source]
This method configures the board clock for the acquisition.
- configure_trigger(trigger_source=None, trigger_level=0, trigger_config=TriggerConfig.EdgeRising)[source]
This method configures the trigger for the acquisition, i.e. internal trigger or triggered on one of the possible channels. The list of possible channels can be obtained from the possible_trigger_channels() method.
- Parameters
trigger_source (str) – the channel to use for triggering, or None to disable external trigger (switch to Immediate trigger). Defaults to None.
trigger_level (float) – the voltage threshold for triggering
trigger_config (pymanip.aiodaq.TriggerConfig, optional) – the kind of triggering, e.g. EdgeRising. Defaults to EdgeRising.
- possible_trigger_channels()[source]
This method returns the list of channels that can be used as trigger.
- async read_analog(resource_names, terminal_config, volt_min=None, volt_max=None, samples_per_chan=1, sample_rate=1, coupling_types='DC', output_filename=None, verbose=True)[source]
This asynchronous method is a high-level method for simple cases. It configures all the given channels, as well as the clock, then starts the acquisition, reads the data, and stops the acquisition.
It is essentially similar to pymanip.daq.DAQmx.read_analog(), except asynchronous and functional for other cards than DAQmx cards.
- Parameters
resource_names (list or str) – list of resources to read, e.g. [“Dev1/ai1”, “Dev1/ai2”] for DAQmx cards, or name of the resource if only one channel is to be read.
terminal_config (list) – list of terminal configs for the channels
volt_min (float) – minimum voltage expected on the channel
volt_max (float) – maximum voltage expected on the channel
samples_per_chan (int) – number of samples to read on each channel
sample_rate (float) – frequency of the clock
coupling_type (str) – coupling for the channel (e.g. AC or DC)
output_filename (str, optional) – filename for direct writing to the disk
verbose (bool, optional) – verbosity level
- read_analog_sync(*args, **kwargs)[source]
Synchronous wrapper around
pymanip.aiodaq.AcquisitionCard.read_analog()
.
- read_sync(tmo=None)[source]
This method is a synchronous wrapper around
start_read_stop()
method. It is a convenience facility for simple usage.
- property samp_clk_max_rate
Maximum sample clock rate
Concrete implementation with nidaqmx-python (pymanip.aiodaq.daqmx)
This module provides a concrete implementation of the AcquisitionCard class using the nidaqmx module.
- class pymanip.aiodaq.daqmx.DAQmxSystem[source]
This class is the concrete implementation for NI DAQmx boards using the nidaqmx module.
- add_channel(channel_name, terminal_config, voltage_range)[source]
Concrete implementation of
pymanip.aiodaq.AcquisitionCard.add_channel()
.
- configure_clock(sample_rate, samples_per_chan)[source]
Concrete implementation of
pymanip.aiodaq.AcquisitionCard.configure_clock()
- configure_trigger(trigger_source=None, trigger_level=0, trigger_config=TriggerConfig.EdgeRising)[source]
Concrete implementation of
pymanip.aiodaq.AcquisitionCard.configure_trigger()
- possible_trigger_channels()[source]
This method returns the list of channels that can be used as trigger.
- property samp_clk_max_rate
Maximum sample clock rate
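By analogy with the Scope example in the Usage section above, a DAQmx acquisition might look like the following sketch (the channel name and clock settings are examples; start(), read() and stop() are the base-class methods described above, assumed to be available on the DAQmx implementation as well):

import asyncio

from pymanip.aiodaq import TerminalConfig
from pymanip.aiodaq.daqmx import DAQmxSystem


async def main():
    with DAQmxSystem() as daq:
        daq.add_channel("Dev1/ai0", TerminalConfig.RSE,
                        voltage_range=10.0)
        daq.configure_clock(sample_rate=5000.0, samples_per_chan=1024)
        daq.start()
        data = await daq.read(tmo=1.0)
        await daq.stop()
        return data

asyncio.run(main())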
Concrete implementation with niscope (pymanip.aiodaq.scope)
This module is a concrete implementation of the AcquisitionCard
class using the niscope
module.
Note
We have tested this module only with a PXI-5922 card.
- class pymanip.aiodaq.scope.ScopeSystem(scope_name=None)[source]
This class is the concrete implementation for NI Scope cards.
- Parameters
scope_name (str) – the name of the scope device, e.g. “Dev1”
- add_channel(channel_name, terminal_config, voltage_range)[source]
Concrete implementation of
pymanip.aiodaq.AcquisitionCard.add_channel()
.
- configure_clock(sample_rate, samples_per_chan)[source]
Concrete implementation for
pymanip.aiodaq.AcquisitionCard.configure_clock()
- configure_trigger(trigger_source=None, trigger_level=0, trigger_config=TriggerConfig.EdgeRising)[source]
Concrete implementation for
pymanip.aiodaq.AcquisitionCard.configure_trigger()
- possible_trigger_channels()[source]
This method returns the list of possible channels for external triggering.
- async read(tmo=None)[source]
Concrete implementation for
pymanip.aiodaq.AcquisitionCard.read()
- property samp_clk_max_rate
Maximum rate for the board clock.
- start()[source]
Concrete implementation for
pymanip.aiodaq.AcquisitionCard.start()
- async stop()[source]
Concrete implementation for
pymanip.aiodaq.AcquisitionCard.stop()
- pymanip.aiodaq.scope.get_device_list(daqmx_devices=None, verbose=False)[source]
This function gets the list of Scope devices in the system. If NI System Configuration is available, the list is grabbed from this library. Otherwise, the function attempts to use the nilsdev command line tool.
Because nilsdev returns both DAQmx and Scope devices, the list of DAQmx devices is queried to remove them from the returned list. If the user code has already queried them, it is possible to pass them to avoid unnecessary double query.
Command line tools
The pymanip package provides several command line tools, for saved experimental session introspection and management, and for simple tests of some instruments. These tools can be invoked from the command line using python -m pymanip. For example, for the inline manual:
$ python -m pymanip -h
Session introspection and management
The sessions created by pymanip use standard formats for storage:
the synchronous sessions use both HDF5 and raw ascii files. For a session named toto, three files are created: toto.dat, toto.hdf5 and toto.log;
the asynchronous sessions use a SQLite3 database. For a session name toto, one file toto.db is created.
The rationale for having three files in the first case is that HDF5 is not designed for repeated access, and it happens sometimes that the file gets corrupted, for example if the program is interrupted while it was writing to the disk. The ascii files, however, are opened in append mode, so that no data can ever be lost.
We switched to a database format for the newer asynchronous session, because such a format is designed for multiple concurrent access, and each transaction is atomic and can be safely rolled back if an unexpected error occurs. The risk of corruption is much less, and we thought it was no longer necessary to keep the ascii files.
Because all those file formats are simple and documented, it is possible to read and inspect them with standard tools, such as h5dump for the synchronous session files, or the sqlite3 command line tool, but it is impractical to use these tools to quickly check the content of some session file. That is why a simple command line tool is provided.
The main command for introspecting saved session is
$ python -m pymanip info session_name
It can read asynchronous sessions, as well as synchronous sessions and old-style OctMI sessions. It is not necessary to append the filename extension, e.g. .db for asynchronous sessions. The command will output the start and end dates of the session, as well as a list of saved variables.
Two other commands, check_hdf and rebuild_hdf, are provided, but they are specific to synchronous sessions, which are stored in HDF5 file format (see the CLI reference below).
Instrument information and live-preview
Instrument classnames
The names of the instrument classes available in the pymanip.instruments
module can be conveniently obtained from the command line:
$ python -m pymanip list_instruments
Scanning for instruments
Two command line tools are provided to scan for instruments:
the list_daq sub-command searches for acquisition cards using the NI-DAQmx library;
the scan_gpib sub-command searches for connected GPIB devices using the linux-gpib free library (linux only). On Windows and Macs, this is not useful because one can simply use the GUI provided by National Instruments (NI MAX).
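For example, to list DAQmx cards and then scan GPIB board 0 (the board number is optional and defaults to 0):

$ python -m pymanip list_daq
$ python -m pymanip scan_gpib 0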
Oscilloscope
The oscillo sub-commands implements a simple matplotlib GUI for the pymanip.aiodaq
module, and allows use any of these channels as oscilloscope and live signal analyser. It is simply invoked with this command
$ python -m pymanip oscillo
then the user is prompted for the channels that can be viewed (on the connected DAQmx and Scope cards).

Live preview
The video sub-command implements live video preview with the pymanip.video
classes. Two GUI toolkits are possible: pyqt or opencv. The desired camera and acquisition parameters must be passed as arguments on the command line.
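For example, to preview the first AVT camera with the OpenCV toolkit at half zoom (see the CLI reference below for all options):

$ python -m pymanip video -t cv -z 0.5 AVT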
CLI reference
pymanip CLI interface
usage: pymanip [-h]
{info,list_instruments,list_daq,check_hdf,rebuild_hdf,scan_gpib,oscillo,video}
...
command
- command
Possible choices: info, list_instruments, list_daq, check_hdf, rebuild_hdf, scan_gpib, oscillo, video
pymanip command
Sub-commands
info
shows the content of a saved pymanip session
pymanip info [-h] [-q] [-l line] [-p varname] session_name
Positional Arguments
- session_name
name of the saved session to inspect
Named Arguments
- -q, --quiet
do not list content.
Default: False
- -l, --line
print specified line of logged data.
- -p, --plot
plot the specified variable.
list_instruments
List supported instruments
pymanip list_instruments [-h]
list_daq
List available acquisition cards
pymanip list_daq [-h]
check_hdf
checks dat and hdf files are identical
pymanip check_hdf [-h] [-p varname] session_name
Positional Arguments
- session_name
Name of the pymanip acquisition to inspect
Named Arguments
- -p, --plot
Plot the specified variable
rebuild_hdf
Rebuilds a pymanip HDF5 file from the ASCII dat file
pymanip rebuild_hdf [-h] input_file output_name
Positional Arguments
- input_file
Input ASCII file
- output_name
Output MI session name
scan_gpib
Scans for connected instruments on the specified GPIB board (linux-gpib only)
pymanip scan_gpib [-h] [board_number]
Positional Arguments
- board_number
GPIB board to scan for connected instruments
Default: 0
oscillo
Use NI-DAQmx and NI-Scope cards as oscilloscope and signal analyser
pymanip oscillo [-h] [-s sampling_freq] [-r volt_range] [-t level] [-T 0]
[-b daqmx] [-p port]
[channel_name [channel_name ...]]
Positional Arguments
- channel_name
DAQmx channel names
Named Arguments
- -s, --sampling
Sampling frequency
Default: 5000.0
- -r, --range
Channel volt range
Default: 10.0
- -t, --trigger
Trigger level
- -T, --trigsource
Trigger source index
Default: 0
- -b, --backend
Choose daqmx or scope backend
Default: “daqmx”
- -p, --serialport
Arduino Serial port
video
Display video preview for specified camera
pymanip video [-h] [-l] [-i interface] [-b board [board ...]] [-t toolkit]
[-s slice slice slice slice] [-z zoom] [-T trigger] [-w]
[-e exposure_ms] [-d bitdepth] [-f framerate] [-r angle]
[-R roi roi roi roi]
camera_type
Positional Arguments
- camera_type
Camera type: PCO, AVT, Andor, Ximea, IDS, Photometrics
Named Arguments
- -l, --list
List available cameras
Default: False
- -i, --interface
Specify interface
Default: “”
- -b, --board
Camera board address
Default: 0
- -t, --toolkit
Graphical toolkit to use: cv or qt
Default: “qt”
- -s, --slice
Slice image x0, x1, y0, y1 in pixels
Default: []
- -z, --zoom
Zoom factor
Default: 0.5
- -T, --Trigger
Trigger mode
Default: -1
- -w, --whitebalance
Enable auto white balance (for color cameras)
Default: False
- -e, --exposure
Exposure time (ms)
Default: 20
- -d, --bitdepth
Bit depth
Default: 12
- -f, --framerate
Acquisition framerate in Hertz
Default: 10.0
- -r, --rotate
Rotate image
Default: 0.0
- -R, --ROI
Set Region of Interest xmin, ymin, xmax, ymax
Miscellaneous
Time utilities (pymanip.mytime)
This module contains very simple functions for handling dates and times.
In particular, the datestr2epoch() function is useful to read fluidlab’s session timestamps, which are roughly in RFC 3339 format.
- pymanip.mytime.sleep(duration)[source]
Prints a timer for the specified duration. This is mostly similar to pymanip.asyncsession.AsyncSession.sleep(), except that it is meant to be used outside a session.
- Parameters
duration (float) – the duration for which to sleep
- pymanip.mytime.datestr2epoch(string)[source]
Converts a date string into an epoch time. A correct string is e.g. ‘2016-02-25T17:36+0100’ or ‘2016-02-25T17:36+01:00’. UTC+1 or UTC+0100 is also accepted by this function. Accepts a single string or a list of strings.
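For illustration, a quick usage sketch:

from pymanip.mytime import datestr2epoch

t = datestr2epoch("2016-02-25T17:36+01:00")
print(t)  # epoch time (in seconds) corresponding to 2016-02-25 16:36 UTC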