pyserial - CPU usage when servicing 2 queues


I'm trying to service a serial port that reads from and writes to two queues (multiprocessing queues, if that matters) sequentially. My reasoning is that a modern CPU should be able to read some bytes and write some bytes in turn far faster than a 57600 baud serial port can move them, but it isn't working as I anticipated: the loop pegs a CPU core at 100%. I've already read this post and this post, but they don't quite hit the nail on the head.

If I run this code on a Raspberry Pi, I get 4 kB/s (32 kbps, about half the baud rate). If I run it on a laptop, it ALMOST uses a full core, but I do get the maximum ~8 kB/s through.

I open my port like this:

ser1 = serial.Serial('./reader', 57600, timeout=0.00001)

and the function in question is:

print("servicing port!")
while(1):
  if(ser1.in_waiting):
    newbytes = ser1.read()
    tx.put(newbytes)
  time.sleep(0.00001)
  try:
    rxbyte=rx.get(block=False)
    ser1.write(rxbyte)
  except QueueEmpty:
    time.sleep(0.00001)
    continue
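
For reference, this loop runs in its own process, with the queues shared between it and the rest of the program. A simplified sketch of how I would expect that wiring to look (the tx, rx and serviceport names match the snippets here; the rest is illustrative, not my exact script):

import multiprocessing as mp
import time
import serial
from queue import Empty as QueueEmpty   # multiprocessing.Queue raises queue.Empty

def serviceport(tx, rx):
    # Open the port inside the worker process so the file descriptor isn't shared
    ser1 = serial.Serial('./reader', 57600, timeout=0.00001)
    print("servicing port!")
    while(1):
        if(ser1.in_waiting):
            newbytes = ser1.read()          # one byte per pass, as in the snippet above
            tx.put(newbytes)
        time.sleep(0.00001)
        try:
            rxbyte = rx.get(block=False)    # non-blocking check for bytes to transmit
            ser1.write(rxbyte)
        except QueueEmpty:
            time.sleep(0.00001)
            continue

if __name__ == '__main__':
    tx = mp.Queue()   # bytes read from the port end up here
    rx = mp.Queue()   # bytes to be written to the port come from here
    worker = mp.Process(target=serviceport, args=(tx, rx), daemon=True)
    worker.start()
    worker.join()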

I tried the suggestion from the first post, like so (the read should block until the timeout set at initialization is hit if there are no bytes):

def serviceport():
  print("servicing port!")
  while(1):
    newbytes = ser1.read(1)           # blocks for up to the port timeout if nothing is waiting
    tx.put(newbytes)
    if(ser1.in_waiting):
      newbytes = ser1.read()          # again defaults to a single byte
      tx.put(newbytes)
    try:
      rxbyte = rx.get(block=False)    # non-blocking check for bytes to transmit
      ser1.write(rxbyte)
    except QueueEmpty:
      time.sleep(0.00001)
      continue

but that doesn't change anything.

If I increase the sleep times as the second post suggests, I DO reduce CPU usage, but then the data rate drops even further.
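
By "increase the sleep times" I mean something like the following (a sketch; the 1 ms value is just an example, not the exact number I used):

print("servicing port!")
while(1):
  if(ser1.in_waiting):
    newbytes = ser1.read()            # still one byte per pass (read() defaults to size=1)
    tx.put(newbytes)
  try:
    rxbyte = rx.get(block=False)
    ser1.write(rxbyte)
  except QueueEmpty:
    pass
  time.sleep(0.001)                   # e.g. 1 ms instead of 10 us: CPU drops, but so does throughput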

What am I missing here?

Cheers, R
