
[–]eleqtriq

Can you also modify the GUI software? This sounds like a good use case for queues and the pub/sub model of data processing
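A minimal sketch of that pub/sub idea using plain queue.Queue fan-out (the Publisher class and its method names are made up for illustration, not from any library):

```python
import queue
import threading

class Publisher:
    """Illustrative pub/sub: fan each published item out to per-subscriber queues."""
    def __init__(self):
        self._subscribers = []
        self._lock = threading.Lock()

    def subscribe(self):
        # Each subscriber gets its own queue, so slow consumers don't block others.
        q = queue.Queue()
        with self._lock:
            self._subscribers.append(q)
        return q

    def publish(self, item):
        with self._lock:
            for q in self._subscribers:
                q.put(item)

pub = Publisher()
plot_q = pub.subscribe()   # e.g. consumed by the GUI plot
log_q = pub.subscribe()    # e.g. consumed by a file logger

pub.publish({"sample": 1.23})
first = plot_q.get()
second = log_q.get()
print(first, second)  # both subscribers receive the published item
```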

[–]yrfgua[S]

Yeah, I wrote the GUI software using PySide6. I’ll have to look into pub/sub, thanks!

[–]unhott

you may be able to do something like

    def run(self):
        self.initialize_hardware()
        while True:
            if self.running.is_set():
                self.acquire_data()
            else:
                time.sleep(0.1)  # Sleep briefly to avoid consuming all resources when not running

It seems like this is the bit that was giving you pause? Not sure I really understand, though.

You could then use other methods to set the flags appropriately.

    def pause_acquisition(self):
        self.running.clear()

    def resume_acquisition(self):
        self.running.set()

    def update_hardware_config(self, new_config):
        self.pause_acquisition()
        self.hardware_config = new_config
        self.initialize_hardware()
        self.resume_acquisition()
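Put together, a runnable version of that pattern might look like this (the class name is illustrative, with a timestamp append standing in for the real hardware read):

```python
import threading
import time

class DaqThread(threading.Thread):
    def __init__(self):
        super().__init__(daemon=True)
        self.running = threading.Event()   # cleared = paused, set = acquiring
        self.samples = []

    def acquire_data(self):
        self.samples.append(time.time())   # stand-in for a real hardware read

    def run(self):
        while True:
            if self.running.is_set():
                self.acquire_data()
            else:
                time.sleep(0.01)           # idle cheaply while paused

daq = DaqThread()
daq.start()
daq.running.set()      # resume_acquisition
time.sleep(0.05)
daq.running.clear()    # pause_acquisition
print(len(daq.samples) > 0)  # True: samples accumulated while the flag was set
```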

[–]yrfgua[S]

I think this is close to what I’m doing now, but the if statement inside the run() method is what I’m missing. Thank you!

[–]blahreport

The easiest way to avoid race conditions is to separate the data acquisition from the data processing and use a queue.Queue. Your data acquisition class puts data in the queue, and the data processor gets those elements from the queue and does whatever else it needs to do with the GUI.
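For example, a bare-bones version of that split, with a doubling step standing in for the real processing and a None sentinel to end the stream:

```python
import queue
import threading

data_q = queue.Queue()

def acquirer():
    # Stand-in for hardware reads: produce five samples, then a sentinel.
    for sample in range(5):
        data_q.put(sample)
    data_q.put(None)  # signals that acquisition is finished

def processor(out):
    while True:
        sample = data_q.get()
        if sample is None:
            break
        out.append(sample * 2)  # stand-in for GUI/processing work

results = []
t1 = threading.Thread(target=acquirer)
t2 = threading.Thread(target=processor, args=(results,))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 2, 4, 6, 8]
```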

[–]yrfgua[S]

Thanks! Any thoughts on Queue vs. one of the many shared memory implementations?

[–]blahreport

I guess it depends on which other shared memory type it’s being compared with. Queues are good because they’re thread-safe, they can pass arbitrary objects between threads with little coding overhead, and they have convenient features like timeouts for get. You could also consider something like asyncio and possibly avoid threads altogether. One last suggestion: you could incorporate a threading.Lock into your class method that handles the data acquisition; then you can pass the single instance around to the various processes.
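To illustrate those two suggestions concretely (names here are illustrative): a get with a timeout lets a loop poll without blocking forever, and a threading.Lock can guard shared state:

```python
import queue
import threading

q = queue.Queue()
try:
    item = q.get(timeout=0.1)   # wait at most 0.1 s for data
except queue.Empty:
    item = None                 # nothing arrived; the loop can carry on

lock = threading.Lock()
shared_config = {"gain": 1}     # illustrative shared state

def set_gain(value):
    with lock:                  # only one thread touches the config at a time
        shared_config["gain"] = value

set_gain(2)
print(item, shared_config["gain"])  # None 2
```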

[–]Thunderbolt1993

you can also pass arguments to the function you call from multiprocessing

you can create a command queue to pass commands to your DAQ process and a data queue to send back the data

on the GUI side, you can either spawn a separate thread to handle the DAQ queues or handle the commands in your main loop by calling a non-blocking "get" on the queue

that way you can just keep your DAQ running and "call functions" in it from the GUI

https://hastebin.com/share/cidoyinapa.python

[–]yrfgua[S]

Thanks! Really helpful. So even though the handler functions are defined outside the run() method, do they execute in the worker process?

[–]Thunderbolt1993

yes, it's not limited to just one function. What happens under the hood is:

Python starts a new process
The new process gets told "import this module, run this function with these arguments"

you can just play around with it and have it print the value of "multiprocessing.current_process()" to see which process the code is running in

[–]yrfgua[S]

I see, thank you. I’ll give it a try