
Example: Linux OS, AInScan continuous written to .csv file in Python


Here is a ULDAQ (Universal Library for Linux) Python example.
It is a modified version of AInScan.py that adds a continuous scan and writes the data to a file.
It demonstrates 'double buffering', a term we use to describe how to stream continuous data with no missing samples.

A brief description:
When running an analog input scan, you specify a channel (or channels), a number of samples, and a rate. Sure, there are other parameters, but for now we will just use these three.
This results in a finite number of samples collected at a user-defined rate.
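
As a rough sketch (not a substitute for the shipped AInScan.py example), a finite scan with those three parameters might look like this with the uldaq Python API. The input mode, range, and single-device assumption are mine and may need to change for your hardware:

from uldaq import (get_daq_device_inventory, DaqDevice, InterfaceType,
                   AiInputMode, Range, AInScanFlag, ScanOption,
                   create_float_buffer)

# Find and connect to the first detected MCC device (assumes one device is attached).
devices = get_daq_device_inventory(InterfaceType.ANY)
daq_device = DaqDevice(devices[0])
daq_device.connect()
ai_device = daq_device.get_ai_device()

# The three parameters discussed above: one channel, 1000 samples, 1000 S/s.
low_channel = 0
high_channel = 0
samples_per_channel = 1000
rate = 1000

# Buffer sized for samples_per_channel samples on 1 channel.
data = create_float_buffer(1, samples_per_channel)

# Start a finite (non-continuous) scan; the call returns the rate actually used.
rate = ai_device.a_in_scan(low_channel, high_channel, AiInputMode.SINGLE_ENDED,
                           Range.BIP10VOLTS, samples_per_channel, rate,
                           ScanOption.DEFAULTIO, AInScanFlag.DEFAULT, data)

Once get_scan_status() reports the scan is no longer running, data holds the 1000 collected samples.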

To make the scan run continuously, you add the scan option ScanOption.CONTINUOUS. But now there is the problem of how to manage the continuous data stream so that you neither lose data nor copy the same data repeatedly.

This is solved by doubling the number of samples used in the non-continuous scan.
For example, if you want to collect 1000 samples per channel at 1000 samples per second continuously, you would specify 2000 samples in your app instead of 1000. The 2000-sample buffer is the doubled buffer, or "double buffer". After starting the scan with a_in_scan(), call get_scan_status() in a timed loop and monitor the parameter transfer_status.current_index. This index wraps around from 0 to 1999, which is the number of samples you set the scan function to collect. By keeping track of the midpoint you know which samples you can safely access.
If the scan's pointer (current_index) is above the midpoint (999), you can comfortably access samples 0-999; if current_index is below the midpoint, you can comfortably access samples 1000-1999. This assumes you have 1 second to access one half of the buffer and manipulate the data.
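
To sketch that change against the finite-scan snippet above (again just an illustration; the attached example's exact variable names may differ), only the buffer size and the scan option change:

samples_per_channel = 1000                 # samples you want to work with at a time
buffer_size = 2 * samples_per_channel      # the doubled buffer: 2000 samples
buffer_mid_point = buffer_size // 2        # 1000: boundary between the two halves

data = create_float_buffer(1, buffer_size)

# ScanOption.CONTINUOUS makes the scan wrap around the buffer and run until scan_stop().
rate = ai_device.a_in_scan(low_channel, high_channel, AiInputMode.SINGLE_ENDED,
                           Range.BIP10VOLTS, buffer_size, rate,
                           ScanOption.CONTINUOUS, AInScanFlag.DEFAULT, data)

With this running, transfer_status.current_index returned by get_scan_status() wraps from 0 to 1999 as described above.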

In short, whatever half of the buffer the app is writing to, you can read from the other half. You have about 1 second to 'do what you gotta do' with the data, meaning manipulate it, save it, or make decisions on it.

With regard to the timer, there is a question of how often the timer should call get_scan_status().
Using our example above of 1000 samples per second and a buffer holding 2 seconds of data (2000 samples in total), we want to know where we are relative to the middle of the buffer, the 999th sample. The goal is to check the location in the buffer often enough that you never miss a half, but not so often that you waste processing power. Think of it like solving for the Nyquist sampling rate. The app has a 2 second buffer. If you checked it every second, more often than not the current_index would tell you whether you are in the upper or lower half of the buffer, but eventually you would miss a half of the buffer. By checking the buffer every 1/2 second you never miss a half-buffer.
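
Here is a minimal sketch of that polling loop, under the same 1000 S/s assumptions as the snippets above. process_half() is a hypothetical placeholder for whatever you do with a half-buffer; the attached example does the equivalent work inline:

import time

half_time = samples_per_channel / rate     # 1 second of data per half-buffer
check_interval = half_time / 2             # poll twice per half, per the Nyquist analogy

while True:
    status, transfer_status = ai_device.get_scan_status()
    index = transfer_status.current_index

    if index >= buffer_mid_point:
        # Device is filling the upper half, so the lower half (0-999) is safe to read.
        process_half(data[0:buffer_mid_point])
    else:
        # Device is filling the lower half, so the upper half (1000-1999) is safe to read.
        process_half(data[buffer_mid_point:buffer_size])

    time.sleep(check_interval)

Note that, as written, this would hand the same half to process_half() more than once per pass; the buff_check flag described below is what prevents that.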

In the attached example, immediately below the call to a_in_scan() is a small segment of code added to give the app time to populate the lower half of the buffer with data. It only runs once each time you start the acquisition. I found this useful so as not to start with an empty half-buffer of data. Here is the code:

status, transfer_status = ai_device.get_scan_status()

while (transfer_status.current_index < buffer_mid_point):
    status, transfer_status = ai_device.get_scan_status()

As you can see, you call get_scan_status() to get the values back from the scan and run a while loop comparing current_index to the middle-of-the-buffer variable (buffer_mid_point).


The last bit of code is a variable called buff_check, which is set to a default value of 1. This variable is used to determine whether I have already processed that particular half of the buffer. Its goal is to make sure we don't process the same half of the buffer more than once.
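
Putting buff_check together with the polling loop and the CSV output, a sketch might look like the following. The encoding of buff_check (1 = lower half next, 2 = upper half next), the file name, and the one-value-per-row CSV layout are my assumptions here, not necessarily what the attached example does:

import csv
import time

buff_check = 1        # which half to write next: 1 = lower half, 2 = upper half (assumed encoding)
check_interval = 0.5  # seconds; half of the 1 second half-buffer time discussed above

with open('a_in_scan_data.csv', 'w', newline='') as csv_file:
    writer = csv.writer(csv_file)
    while True:
        status, transfer_status = ai_device.get_scan_status()
        index = transfer_status.current_index

        if index >= buffer_mid_point and buff_check == 1:
            # Lower half is stable and has not been written yet.
            writer.writerows([[value] for value in data[0:buffer_mid_point]])
            buff_check = 2
        elif index < buffer_mid_point and buff_check == 2:
            # Upper half is stable and has not been written yet.
            writer.writerows([[value] for value in data[buffer_mid_point:buffer_size]])
            buff_check = 1

        time.sleep(check_interval)

When you are done acquiring, stop and clean up with ai_device.scan_stop(), daq_device.disconnect(), and daq_device.release().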

While the default example reads data from four channels (0 to 3), this example uses only one channel (channel 0) to make it easier to understand how it works.

The best way to see this app in action is to provide a repeating ramp to your Measurement Computing data acquisition device, save the data to file, then load the file into a spreadsheet application such as Microsoft© Excel® or LibreOffice® Calc and graph the collected data as a line graph. If you see a repeating ramp with no misplaced data points, it's working!

The data is written to a local file with the extension .csv.

I hope you find this example useful.
If you have any questions, please feel free to contact me at:
508-946-5100 and press 2 for Technical Support.

Happy programming,
Jeff



Attachments

a_in_scan_continuous_to_file.py (7.79 KB)

For comments email TechSupport@mccdaq.com.

Details
Article ID: 50760
Last Modified: 11/6/2018 9:43:06 AM