Is it correct to use attributes in pipe methods?


In the following code, a source process transmits information by reading elements from a tuple and writing them one by one to an anonymous pipe. A second process plays the role of a transformer process: it reads what the source process writes into the anonymous pipe, tokenizes the messages (splits them into words on whitespace), and writes the result to a FIFO (a named pipe). My question: is it correct that I pass received_messages as an attribute (parameter) to the transformer process, or should the transformer process have no attributes?
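
For reference, both functions below rely on a MESSAGES sequence and a fifoname path defined elsewhere in my program; a minimal setup along these lines would work (the concrete values here are only placeholders for illustration):

import os

# Placeholder data: the real messages and FIFO path are defined elsewhere.
MESSAGES = ("hello world", "foo bar", "lorem ipsum", "goodbye")
fifoname = "/tmp/transformer_fifo"

# Create the named pipe once if it does not already exist.
if not os.path.exists(fifoname):
    os.mkfifo(fifoname)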

I tried this code:

def source_process():
    received_messages = []
    try:
        p = os.pipe()  # p[0] is the read end, p[1] is the write end of the anonymous pipe
    except OSError as e:
        print(f"pipe call failed: {e}")
        os._exit(1)
    try:
        pid = os.fork()
    except OSError as e:
        print(f"fork call failed: {e}")
        os.close(p[0])
        os.close(p[1])
        os._exit(1)
    if pid == 0:  # child process
        os.close(p[0])  # child does not read, so close the read end
        os.write(p[1], MESSAGES[0].encode())  # encode converts strings to bytes
        os.write(p[1], MESSAGES[1].encode())
        os.write(p[1], MESSAGES[2].encode())
        os.write(p[1], MESSAGES[3].encode())
        os.close(p[1])
        os._exit(0)  # child is done; do not fall through into the parent's code
    else:  # pid != 0, so we are in the parent process
        os.close(p[1])  # parent does not write, so close the write end
        for message in MESSAGES:
            # relies on each message arriving in full; os.read may return fewer bytes
            read_message = os.read(p[0], len(message))
            received_messages.append(read_message.decode())
        os.wait()
        os.close(p[0])
    return received_messages

def transformer_process(received_messages):
    split_messages = []
    for received_message in received_messages:
        words = received_message.strip().split()
        split_messages.append(words)
    try:
        # Open the named pipe for writing
        with open(fifoname, 'w') as pipeout:
            for words in split_messages:
                # Write each tokenized word on its own line of the named pipe
                for word in words:
                    pipeout.write(word + '\n')
    except Exception as e:
        print(f"Error in transformer process: {e}")
    return split_messages
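
In case it helps to judge the design, this is roughly how I picture the two functions being wired together; the FIFO reader here is only an illustration (opening a FIFO for writing blocks until some process opens it for reading):

if __name__ == "__main__":
    # Throw-away reader so that opening the FIFO for writing does not block forever.
    reader_pid = os.fork()
    if reader_pid == 0:
        with open(fifoname, "r") as pipein:
            for line in pipein:
                print("token:", line.strip())
        os._exit(0)

    # Parent: collect the messages from the anonymous pipe, then tokenize them
    # and push the individual words into the FIFO.
    received_messages = source_process()
    transformer_process(received_messages)
    os.waitpid(reader_pid, 0)  # reap the FIFO reader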

I'm not asking for a full solution; just tell me whether this is correct or whether I should change it.
