I have a script that creates multiple VLC instances and plays through a list of videos on each instance (different videos per instance, most of them streams from URLs).
The issue I'm facing is that on one of my machines everything runs as expected (a laptop with integrated graphics and 16GB RAM). On another machine I get crashes with exception codes like 0xC0000096 and 0xC0000005. This machine is beefier (a desktop with a dedicated GPU (RTX 3080), a CPU with no integrated graphics, and 32GB RAM).
In the output on the beefy machine I see things like:
[0000018c4a52eb60] direct3d11 vout display error: SetThumbNailClip failed: 0x800706f4
[0000018c4a27f6b0] avcodec decoder: Using D3D11VA (NVIDIA GeForce RTX 3080, vendor 10de(NVIDIA), device 2206, revision a1) for hardware decoding
This makes me think something is going on with the GPU (driver/codec/something else?).
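To test the GPU theory, I've been thinking of forcing software decoding via libvlc options. A sketch of what I mean (`--avcodec-hw=none` and `--verbose` are standard VLC flags; `make_instance` is just a hypothetical helper):

```python
# Diagnostic sketch: build each vlc.Instance with hardware decoding
# disabled, taking the 3080's D3D11VA path out of the equation.
DIAG_OPTS = [
    "--avcodec-hw=none",  # force software decoding
    "--verbose=2",        # debug-level log output, written to stderr
]

def make_instance(opts=DIAG_OPTS):
    """Create a libvlc Instance with the diagnostic options above."""
    import vlc  # imported lazily so the option list can be inspected without libvlc
    return vlc.Instance(opts)
```

If the crashes stop with hardware decoding off, that would point at the D3D11VA path rather than my player logic.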
Here's my code for the player:
```python
class Player(Tk.Frame):
    """A single VLC player window; cycles through its list of videos."""

    def __init__(self, parent, index, vids=None, title=None):
        parent.geometry('478x1000+' + str(index * 478) + '+0')
        Tk.Frame.__init__(self, parent)
        self.ins = vlc.Instance()
        self.player = self.ins.media_player_new()
        self.player.audio_set_mute(True)
        self.player.set_hwnd(self.winfo_id())
        self.parent = parent
        self.play = True
        self.vids = vids
        self.num_vids = len(vids)
        self.thread = None
        self.should_skip = False
        self.count = 0
        # libvlc states that count as "still going":
        # Opening, Buffering, Playing, Paused
        self.playing = {vlc.State.Opening, vlc.State.Buffering,
                        vlc.State.Playing, vlc.State.Paused}
        self.timer = 0
        self.media = None
        self.callback_id = None
        self.url_title = None
        self.cur_vid_len = 0
        self.parent.bind('<Return>', self.skip)
        self.after(1, self.start_new_video)

    def skip(self, event):
        self.should_skip = True

    def start_new_video(self, event=None):
        self.url_title = self.vids[self.count % self.num_vids]
        self.media = self.ins.media_new(self.url_title[0])
        self.media.get_mrl()
        self.player.set_media(self.media)
        self.parent.title(self.url_title[1])
        self.player.play()
        self.timer = 0
        self.cur_vid_len = 0
        self.after(1, self.on_tick)

    def on_tick(self, event=None):
        if (self.player.get_state() in self.playing
                and self.timer < 60 and not self.should_skip):
            self.timer += 1
            self.callback_id = self.after(1000, self.on_tick)
        else:
            if self.player is not None and self.player.is_playing():
                self.player.stop()
            if self.media is not None:
                self.media.release()
                self.media = None
            self.should_skip = False
            self.adjusted_vid_time = False
            self.count += 1
            if self.callback_id is not None:
                self.after_cancel(self.callback_id)
                self.callback_id = None
            self.after(1, self.start_new_video)
```
And here's how I'm creating the players:
```python
root = Tk.Tk()
root.withdraw()

num_windows = 2
for i in range(num_windows):
    top = Tk.Toplevel(root)
    start = int(i * len(video_list) / num_windows)
    end = min(int((i + 1) * len(video_list) / num_windows), len(video_list))
    print(start, end)
    Player(top, i, video_list[start:end]).pack(fill="both", expand=True)
root.mainloop()
```
Basically, I create a certain number of windows and give each window its own list of videos. Each player then plays each video for a set amount of time (here about 60 seconds) before moving on to the next one.
I've tried debugging the code for actual issues that might be causing memory access violations and can't find anything. I can run this code for 30+ minutes without issue on the laptop, while the desktop fails within seconds (usually around the point where a player is queuing up the next video). I've also stepped through the video-switching and play logic, and that hasn't turned up much.
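For reference, the kind of reordering I've experimented with when switching videos looks roughly like this (a sketch only; `swap_media` is a hypothetical helper, not part of the code above):

```python
def swap_media(player, instance, mrl):
    """Hypothetical safer swap: stop before detaching the old media,
    so libvlc isn't freeing media a decoder thread may still be using."""
    player.stop()  # synchronous in libvlc 3.x
    old = player.get_media()
    player.set_media(instance.media_new(mrl))
    if old is not None:
        old.release()  # drop the extra reference get_media() returned
    player.play()
```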
Another thing I've noticed: if I run with only one video, the error doesn't seem to trigger. Is it possible that running multiple at once causes issues on the dedicated GPU but not on integrated graphics?
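One isolation step I'm considering is running each player in its own process, so a crash in one libvlc instance can't take down the rest. A sketch (assumptions: `run_player` and `split_videos` are hypothetical helpers, and the `Player` class is the one above):

```python
import multiprocessing as mp

def split_videos(video_list, num_windows):
    """Pure helper: slice the flat list into one chunk per window,
    matching the slicing in the window-creation loop above."""
    n = len(video_list)
    return [video_list[int(i * n / num_windows):min(int((i + 1) * n / num_windows), n)]
            for i in range(num_windows)]

def run_player(index, vids):
    # Hypothetical per-process entry point: tkinter (and vlc, inside
    # Player) are imported here so a libvlc access violation stays
    # inside this process instead of killing every window.
    import tkinter as Tk
    root = Tk.Tk()
    # ... build Player(root, index, vids) exactly as above ...
    root.mainloop()

def launch(video_list, num_windows=2):
    procs = [mp.Process(target=run_player, args=(i, chunk))
             for i, chunk in enumerate(split_videos(video_list, num_windows))]
    for p in procs:
        p.start()
    return procs
```

If only one of the processes dies, that would at least confirm the crash is per-instance rather than something shared like the Tk mainloop.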