I want to use the Moku:Pro to calculate the histogram of an input signal over >12 h of continuous testing. The sample rate has to be >500 kSa/s, so we're looking at >20 GSa in total. No samples may be missed. Apart from snapshots roughly every second, I don't care about the actual measurement data, only the histogram counts.
The program logic would be: periodically fetch all data since the last fetch, calculate the histogram of this chunk, add the counts to the running sum, discard the measurement data, and repeat.
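A minimal sketch of that loop (NumPy assumed; the bin range, bin count, and fetch_new_samples() are placeholders for my actual setup):

```python
import numpy as np

# Assumed bin layout; adjust to the real input range of the signal.
BIN_EDGES = np.linspace(-5.0, 5.0, 1025)               # 1024 bins over +/-5 V
counts = np.zeros(len(BIN_EDGES) - 1, dtype=np.int64)  # int64 easily holds >20 GSa

while measurement_running:                       # placeholder loop condition
    chunk = fetch_new_samples()                  # placeholder: all data since the last fetch
    c, _ = np.histogram(chunk, bins=BIN_EDGES)   # histogram of this chunk only
    counts += c                                  # add to the running sum
    # 'chunk' goes out of scope here, so only the counts stay in memory
```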
My first attempt would be to use the Datalogger in streaming mode. From reading the API documentation, I think get_stream_data() always returns all data from the start of the stream. Is this correct? If so, all data points would have to be stored either inside the Moku:Pro or on the host computer, and the maximum duration would be limited by available disk space and/or memory.
But what exactly does get_chunk() do? Or rather: what exactly is a chunk in the context of the Datalogger API?
How would I best implement this measurement (in Python)?
I believe get_stream_data() should do exactly what you want. Would you mind sharing why you think this API call always returns all data from the start of the stream?
get_chunk() retrieves the current data packet from the Moku:Pro, but in binary form. Handling the binary-to-readable conversion is very difficult for users, so we developed the get_stream_data() API to give you the already converted data.
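For reference, the basic streaming pattern looks roughly like this (a sketch based on the Datalogger streaming example; the IP address, sample rate, and process() are placeholders, and exact method names may vary between client versions):

```python
from moku.instruments import Datalogger

dl = Datalogger('192.168.0.1', force_connect=True)  # placeholder IP
try:
    dl.set_samplerate(500e3)            # target rate from the original post
    dl.start_streaming(duration=60)     # short test run
    while True:
        data = dl.get_stream_data()     # only new samples, already converted
        if not data:
            break                       # stream finished
        process(data['ch1'])            # placeholder for the histogram step
finally:
    dl.stop_streaming()
    dl.relinquish_ownership()
```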
Hi Hank,
thanks for the quick reply.
I guess it was just a misinterpretation of the Datalogger (Streaming) example.
So get_stream_data() returns only the new values since the last call, and I could stream data for up to 2^32 seconds as long as I repeatedly fetch it? What size is the buffer, and at 500 kSa/s, how often would I have to call get_stream_data() to prevent a buffer overflow and/or missing data?
You can set a streaming duration of up to 10,000 hours as long as you fetch the data fast enough. The streaming buffer is around 152 MB. To be honest, there is no specific number for the fetching rate; you just have to fetch the data as fast as you can. It is therefore necessary to make sure the network connection between the computer and the Moku:Pro is stable and fast.
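As a rough back-of-envelope check (the on-wire sample size is an assumption; it is not specified in this thread):

```python
rate = 500e3             # Sa/s, from the original post
bytes_per_sample = 4     # ASSUMPTION: e.g. one float32 per sample
buffer_bytes = 152e6     # ~152 MB streaming buffer

fill_rate = rate * bytes_per_sample    # ~2.0 MB/s into the buffer
headroom_s = buffer_bytes / fill_rate  # ~76 s until overflow if fetching stalls
print(f"buffer fills in about {headroom_s:.0f} s")
```

On that assumption there are only a few tens of seconds of headroom, so any sustained slowdown in fetching will eventually overflow the buffer.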
Hi Hank,
in the meantime I was able to test my setup.
Unfortunately, retrieving the data via get_stream_data() is not fast enough: after a few minutes, the stream aborts due to a buffer overflow.
This appears to be caused by the binary-to-readable conversion, since the same setup runs fine when using the get_chunk() function instead.
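If the conversion itself can't be sped up, my fallback idea is to decouple fetching from histogramming, roughly like this (an untested sketch; it only helps if the histogram step, and not just the conversion inside get_stream_data(), contributes to the bottleneck):

```python
import queue
import threading

import numpy as np

chunks = queue.Queue(maxsize=64)
stop = threading.Event()

def fetcher(dl):
    # Drain the instrument as fast as possible; no heavy work in this thread.
    while not stop.is_set():
        data = dl.get_stream_data()
        if not data:
            break
        chunks.put(data)

def histogrammer(bin_edges, counts):
    # Accumulate counts in a second thread so fetching never waits on NumPy.
    while not stop.is_set() or not chunks.empty():
        try:
            data = chunks.get(timeout=1.0)
        except queue.Empty:
            continue
        c, _ = np.histogram(data['ch1'], bins=bin_edges)
        counts += c

# usage: start both as threading.Thread targets with the 'dl', BIN_EDGES
# and 'counts' objects from the sketches above
```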
So, two questions:
Do you see any possibility to speed up the get_stream_data() function?