Running the DAQ
Many processes have to run in separate instances at the same time for the DAQ to function. Here, we describe how to reach that state so that data can be recorded.
We assume that the DAQ has been set up and all necessary pieces of software have been installed. Furthermore, we assume that the DAQ software is currently in an unmodified state (i.e. no detectors, NIM modules, VME modules, … have recently been added or removed from the setup, necessitating changes to DAQ configuration files etc.). All of these topics are covered on the DAQ Setup page.
Quick start
A tmux configuration is provided with the ausadaq super project. The configuration simply divides a tmux session into various panes, each of which is to have an instance of the DAQ running.
Run tmux on the controller. Then, hit CTRL+b followed by SHIFT+d to load the tmux configuration. The resulting tmux session should look as in the following image.
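If the key binding does not take effect, the same configuration can usually be loaded by hand with tmux's source-file command (a sketch; the path to the tmux configuration file within the ausadaq super project is a placeholder here):
tmux source-file /path/to/ausadaq/tmux.conf # hypothetical path to the provided configuration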
Note that the title of each pane is the command(s) to be run in the given pane. The order in which command(s) are to be run is top-to-bottom, left-to-right.
Note also that the hostname of the board in our mobile DAQ is mot2, so the title of the pane to the left should in this case read ssh q@mot2 ; vi [...].
The following section gives a brief explanation of what is to happen in the one pane on the left and the first four panes on the right.
The main DAQ instances
The 3 drasi processes
First, drasi_readout on the board as well as the drasi event builder lwrocmerge and the drasi message handler lwrocmon on the controller must be running.
From within the home directory of q on the board, do
run_daq_subprocess drasi # to continue running in its terminal session
On the controller, do
run_daq_subprocess eb # to continue running in its terminal session
# (...)
run_daq_subprocess mh # to continue running in its terminal session
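A quick way to verify that all three drasi processes are alive is to look them up with standard process tools (a sketch; the process names and the mot2 hostname are taken from the text above):
pgrep -af lwrocmerge # on the controller: the event builder
pgrep -af lwrocmon # on the controller: the message handler
ssh q@mot2 'pgrep -af drasi_readout' # on the board: the readout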
ucesb-related processes
Then, a minimal ucesb instance is started as a data relay which passes data streams on to any client which connects to it. Online visualisation in go4/go4cesb is one such ucesb-based client. Another minimal ucesb instance connects to the primary ucesb instance and writes data to file when the user wishes to take data.
On the controller, do
run_daq_subprocess relay # to continue running in its terminal session
# (...)
run_daq_subprocess go4 # to continue running in its terminal session
All parts of the DAQ are now running and data can be recorded.
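A quick sanity check that the ucesb relay is up and serving could look like this (a sketch; the relay is an instance of ucesb's empty/empty, as can be seen in the RELAY definition further below):
pgrep -af empty/empty # is the relay process running?
ss -tlnp | grep empty # which TCP ports is it serving on?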
Data pipeline buffer sizes
(This subsection can most probably safely be skipped.)
For each of the drasi- and ucesb-based processes there generally exist one or more data transport pipelines and one or more data stream pipelines. Each of these pipelines is assigned a buffer of some size.
Due to the way our DAQ runs – with some of the drasi- and ucesb-based processes constantly running and some started and stopped on a whim – data will continually accumulate in the buffer of the data transport pipeline which our file taking process attaches to. When the buffer is full, the oldest data in the buffer are removed in order to make room for the newest data at the other end of the buffer (i.e., it is a FIFO). When the file taking process attaches to the data transport pipeline, the pipeline's buffer is continually flushed into the file taking process, which enables the actual recording of data to file.
If the buffer is very large compared with the size of each event and the average event rate, one could imagine that data from several minutes or hours ago are still present in the buffer when an experimenter wishes to start taking data to file.
Thus, it is important to be aware of the buffer size of the data transport pipeline.
The size is encoded into the ucesb relay process, the invoking command of which can be found in the file controller/bin/run_daq_subprocess in the definition of RELAY, e.g.:
# (...)
RELAY="${UCESB_BASE_DIR}/empty/empty --stream=${CONTROLLER_IP}:${DATA_STREAM_PORT} --quiet --colour=no --allow-errors --server=trans,bufsize=1024ki,flush=1 --server=stream,bufsize=16M,flush=1"
# (...)
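Should a larger transport buffer ever be desired, the bufsize value can be raised in this definition, following the same syntax already used for the stream server (a sketch; whether e.g. 4M is an appropriate value depends on the setup):
# (...)
RELAY="${UCESB_BASE_DIR}/empty/empty --stream=${CONTROLLER_IP}:${DATA_STREAM_PORT} --quiet --colour=no --allow-errors --server=trans,bufsize=4M,flush=1 --server=stream,bufsize=16M,flush=1"
# (...)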
The argument --server=trans,bufsize=1024ki,flush=1 tells us that the data transport pipeline has a buffer size of 1024KiB, which is the minimum reasonable buffer size; if the buffer size is reduced further, ucesb will create as many data streams as are necessary to satisfy (buffer size)*(streams) >= 1024KiB. Typically, our event sizes are of order 100KiB, so roughly the latest 10 events (if any) seen by the DAQ will be present in the buffer when a new file taking process is started. In most practical circumstances, this should not be a problem! The flush=1 part of the argument means that the ucesb relay process will attempt to flush the buffer every second.
Recap
Three drasi-related processes and two ucesb-related processes have to run for the DAQ to be ready to record data — and data recording is itself yet another ucesb-related process.
The remaining sections on this page describe various useful tools/procedures for when the DAQ is running.
Monitoring scalers
On the board, do
scalers
In order for scalers to be legible, its invoking terminal usually has to take up half a monitor's width.
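scalers can also be launched directly from the controller (a sketch; mot2 is the hostname of our mobile DAQ board, as noted above):
ssh -t q@mot2 scalers # -t allocates a terminal so the live display renders properly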
Changing active triggers and/or trigger reduction
In the tmux session in the above image, kill the run_daq_subprocess drasi session if it is running.
Then, edit the file trigger.trlo located in the home directory of q on the board. Be on the lookout for statements like tpat_enable and trig_red and change these to suit your needs.
Then, from within the home directory of q on the board, restart drasi:
run_daq_subprocess drasi # to continue running in its terminal session
It may be prudent to check if the other DAQ subprocesses need restarting as well.
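Put together, the procedure might look like this (a sketch; the hostname and file name are taken from the text above):
ssh q@mot2 # log in to the board
vi trigger.trlo # adjust tpat_enable and/or trig_red statements
run_daq_subprocess drasi # restart the readout in this terminal session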
mesycontrol
We use Mesytec’s software mesycontrol to set gains, adjust trigger thresholds, adjust pole zero, etc. on our Mesytec shapers via an MRC-1.
The executable mesycontrol_gui launches the software.
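To keep the invoking terminal free, the GUI can be run in the background (the same pattern as used for pfeiffer below):
mesycontrol_gui &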
pfeiffer
A simple serial-communications Python script reads from a Pfeiffer pressure gauge over USB every ~5 seconds and posts the result to our InfluxDB database, from which Grafana can then read.
The environment variables PFEIFFER_HOST, PFEIFFER_USER and PFEIFFER_PASS must be defined in daqenv for the script to work.
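As a sketch, the corresponding lines in daqenv might look as follows (assuming daqenv uses plain shell export syntax; all values are placeholders):
export PFEIFFER_HOST="placeholder-host" # placeholder value
export PFEIFFER_USER="placeholder-user" # placeholder value
export PFEIFFER_PASS="placeholder-pass" # placeholder value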
The executable pfeiffer runs the Python script.
To have the program run in the background, do
pfeiffer &
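To later check on or stop the background logger, standard process tools suffice (the match pattern follows the executable name above):
pgrep -af pfeiffer # is the logger still running?
pkill -f pfeiffer # stop it again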
Taking data
Consult the next page in the series of pages concerning our DAQ.