I am trying to have two scripts running in parallel, with one feeding the other.
First I trained a model to decode different gestures, following this tutorial: https://www.youtube.com/watch?v=yqkISICHH-U
That script opens the webcam and decodes the gestures I am making, and creates a new variable (called mvt_ok) when the same movement is decoded 3 consecutive times. At that point I wish to send the information to another script that runs an experimental task developed in PsychoPy (a Python tool for building psychology experiments). Basically, as soon as the first script (gesture detection with the webcam) feeds the second one, I want the second one (the PsychoPy task) to present another stimulus.
To summarise, I wish to open the video, then start the PsychoPy script and present the first stimulus; then a movement is expected to be detected in the video, and this information should be fed to the PsychoPy script to change the stimulus.
So far I am really far from achieving that, and I have only been able to send mvt_ok to another script with a function such as the following:
```python
def f(child_conn, mvt_ok):
    print(mvt_ok)
```
Actually I am not sure how I could reuse the mvt_ok variable to feed it to my PsychoPy script.
I won't include every line of the gesture-recognition part because it is maybe too long, but the most crucial ones are here:
```python
if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    sentence = []
    while cap.isOpened():
        ret, frame = cap.read()
        image_np = np.array(frame)
        input_tensor = tf.convert_to_tensor(np.expand_dims(image_np, 0), dtype=tf.float32)
        detections = detect_fn(input_tensor)
        num_detections = int(detections.pop('num_detections'))
        detections = {key: value[0, :num_detections].numpy()
                      for key, value in detections.items()}
        detections['num_detections'] = num_detections
        # detection_classes should be ints.
        detections['detection_classes'] = detections['detection_classes'].astype(np.int64)
        label_id_offset = 1
        image_np_with_detections = image_np.copy()
        viz_utils.visualize_boxes_and_labels_on_image_array(
            image_np_with_detections,
            detections['detection_boxes'],
            detections['detection_classes'] + label_id_offset,
            detections['detection_scores'],
            category_index,
            use_normalized_coordinates=True,
            max_boxes_to_draw=5,
            min_score_thresh=.8,
            agnostic_mode=False)
        cv2.imshow('object detection', cv2.resize(image_np_with_detections, (800, 600)))
        if np.max(detections['detection_scores']) > 0.95:
            word = category_index[detections['detection_classes'][np.argmax(detections['detection_scores'])] + 1]['name']
            sentence.append(word)
            if len(sentence) >= 3:
                if sentence[-1] == sentence[-2] and sentence[-1] == sentence[-3]:
                    print('ok')
                    mvt_ok = 1
                    p = Process(target=f, args=(child_conn, mvt_ok))
                    p.start()
                    p.join()
        if cv2.waitKey(10) & 0xFF == ord('q'):
            cap.release()
            cv2.destroyAllWindows()
            break
```
One way to do this in pure Python is to use a multiprocessing Queue(), which Python objects can be sent through much more simply than via sockets. There is a very good example here.
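For illustration, a minimal sketch of the Queue() approach (the stimulus_task helper and the None sentinel are hypothetical stand-ins for the real PsychoPy loop):

```python
# Minimal Queue() sketch: the detector puts an event on the queue each
# time a gesture is confirmed; the stimulus process blocks on get().
from multiprocessing import Process, Queue

def stimulus_task(q):
    # Stand-in for the PsychoPy loop: advance the stimulus on each event.
    while True:
        mvt_ok = q.get()       # blocks until the detector sends something
        if mvt_ok is None:     # sentinel: detector has finished
            break
        print('movement confirmed - presenting next stimulus')

if __name__ == '__main__':
    q = Queue()
    p = Process(target=stimulus_task, args=(q,))
    p.start()
    # ... inside the detection loop, when the same gesture is seen 3 times:
    q.put(1)                   # send mvt_ok
    q.put(None)                # shut the stimulus process down
    p.join()
```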
Another, possibly easier, option is MQTT. You could install mosquitto as an MQTT broker. Then your webcam script can "publish" detection events and your stimulus script can "subscribe" to detections and get notified, and vice versa. MQTT allows for multi-megabyte messages, so if big messages are needed I would recommend it. The code for the video acquisition end might look like this:
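(A minimal sketch, assuming the paho-mqtt package; the broker address, port and topic are read from the settings.json shown further down.)

```python
# Video acquisition end: publish a message each time a gesture is confirmed.
# Sketch only - assumes the paho-mqtt package (1.x constructor style) and
# a broker such as mosquitto running at the address in settings.json.
import json
import paho.mqtt.client as mqtt

with open('settings.json') as f:
    settings = json.load(f)

client = mqtt.Client()
client.connect(settings['broker'], settings['port'])
client.loop_start()            # handle network traffic in the background

# ... inside the detection loop, when the same gesture is seen 3 times:
client.publish(settings['topic'], payload='mvt_ok')
```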
And for the stimulus end, it might look like this:
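(Again a sketch under the same assumptions; on_message runs in paho's network thread whenever the detector publishes.)

```python
# Stimulus end: subscribe to the detection topic and advance the PsychoPy
# stimulus whenever a message arrives.
import json
import paho.mqtt.client as mqtt

with open('settings.json') as f:
    settings = json.load(f)

def on_message(client, userdata, msg):
    # In the real script this would trigger the next PsychoPy stimulus.
    print(f'received {msg.payload.decode()} - presenting next stimulus')

client = mqtt.Client()         # paho-mqtt 1.x constructor style
client.on_message = on_message
client.connect(settings['broker'], settings['port'])
client.subscribe(settings['topic'])
client.loop_forever()          # block and dispatch incoming messages
```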
And I used a settings.json containing this:
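(A plausible version matching the sketches above; the broker address, port and topic name are assumptions.)

```json
{
    "broker": "localhost",
    "port": 1883,
    "topic": "gestures/mvt_ok"
}
```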
Redis also supports pub/sub and is simple, fast and lightweight. The code would be structured very similarly to that above for MQTT. You can also just share a variable, a list, an atomic integer, or a set with Redis.
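For comparison, a minimal Redis pub/sub sketch using the redis package (the channel name and a broker on localhost are assumptions):

```python
# Redis pub/sub sketch. Detection side:
import redis

r = redis.Redis(host='localhost', port=6379)
r.publish('gestures', 'mvt_ok')        # fire an event per confirmed gesture

# Stimulus side (a separate script): block on the channel and react.
p = r.pubsub()
p.subscribe('gestures')
for message in p.listen():
    if message['type'] == 'message':   # skip the subscribe confirmation
        print('movement confirmed - presenting next stimulus')
```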
You could also use a simple UDP message between the processes if you don't want to pass big, complicated Python objects. It should be very reliable if both processes are on the same host, and will probably allow up to 1 kB or so of data per message. This is pure Python, with no extra packages, modules or servers needed.
The video acquisition might look like this:
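(A sketch using only the standard library; the host and port come from the settings2.json shown below.)

```python
# Video acquisition end: send one small UDP datagram per confirmed gesture.
import json
import socket

with open('settings2.json') as f:
    settings = json.load(f)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# ... inside the detection loop, when the same gesture is seen 3 times:
sock.sendto(b'mvt_ok', (settings['host'], settings['port']))
```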
And the stimulus code might look like this:
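(Same assumptions as the sender sketch.)

```python
# Stimulus end: block on recvfrom() until the detector sends a datagram.
import json
import socket

with open('settings2.json') as f:
    settings = json.load(f)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((settings['host'], settings['port']))

while True:
    data, addr = sock.recvfrom(1024)   # waits for the next detection event
    print(f'received {data.decode()} - presenting next stimulus')
```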
And I used a settings2.json containing this:
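(A plausible version for the UDP sketches above; the loopback address and port number are assumptions.)

```json
{
    "host": "127.0.0.1",
    "port": 50000
}
```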
Another, pure Python way of doing this is with a multiprocessing connection. You would need to start the stimulus process first if using this method - or at least the process in which you put the listener. Note that you can send Python objects using this technique; just change to conn.send(SOMEDICT or ARRAY or LIST). The video acquisition might look like this:
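(A sketch with multiprocessing.connection; the address and authkey come from the settings2.json variant shown below.)

```python
# Video acquisition end: connect to the listener owned by the stimulus
# script (which must already be running) and send events through it.
import json
from multiprocessing.connection import Client

with open('settings2.json') as f:
    settings = json.load(f)

conn = Client((settings['host'], settings['port']),
              authkey=settings['authkey'].encode())

# ... inside the detection loop, when the same gesture is seen 3 times:
conn.send('mvt_ok')            # could equally be a dict, list or array
```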
And the stimulus end might look like this:
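(Same assumptions; this side must be started first, since it owns the listening socket.)

```python
# Stimulus end: accept the detector's connection, then block on recv().
import json
from multiprocessing.connection import Listener

with open('settings2.json') as f:
    settings = json.load(f)

listener = Listener((settings['host'], settings['port']),
                    authkey=settings['authkey'].encode())
conn = listener.accept()       # waits for the video script to connect

while True:
    msg = conn.recv()          # waits for the next detection event
    print(f'received {msg} - presenting next stimulus')
```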
And your settings2.json would need to look like this:
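(A plausible version for the connection sketches above; host, port and authkey are assumptions.)

```json
{
    "host": "localhost",
    "port": 6000,
    "authkey": "secret"
}
```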
Some of the above ideas are very similar to the examples I gave in this answer, although the purpose is slightly different.
None of these methods are locking/blocking, so you can happily run one program without the other needing to be running.