I was just following along with a YouTube tutorial. When I called show(), the bass line was rendered, but the higher voices were not.
from music21 import *

score = stream.Score()
part = stream.Part()
bass_line = stream.Part()
voice1 = stream.Voice()
voice2 = stream.Voice()

notes = ['C4', 'D4', 'E4', 'F4', 'G4', 'A4', 'B4', 'C5']
for notename in notes:
    melody_note = note.Note(notename)
    voice1.append(melody_note)
    harmony_note = melody_note.transpose(-8)  # new note 8 semitones below the melody note
    voice2.append(harmony_note)
    bass_note = note.Note(notename)
    bass_note.octave -= 2                     # same pitch class, two octaves down
    bass_line.append(bass_note)

part.append([voice1, voice2])
# part.insert(0, voice1)
# part.insert(0, voice2)
score.insert(0, part)
score.insert(0, bass_line)
# score.write('musicxml', 'parts.xml')
score.show()
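To check whether the notes are actually in the Score, independent of MuseScore, something like this could be appended to the script to dump the stream hierarchy as text (it assumes the score object built above):

score.show('text')                # text dump of parts, voices, notes and their offsets
for p in score.parts:
    print(p, len(p.recurse().notes), 'notes in this part')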
I asked ChatGPT, and it said I should change
part.append([voice1, voice2])
to
part.insert(0, voice1)
part.insert(0, voice2)
but that didn't change anything. It also recommended that I write a MusicXML file and open it in MuseScore to check whether all the parts were there. I did that, and it was still just the bass line.
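If I understand the suggested change, append() places the second voice after the first in time, while insert(0, ...) stacks both at offset 0 so they sound together. A throwaway sketch, separate from my script, that just prints the resulting offsets:

from music21 import stream, note

p_append = stream.Part()
p_append.append([stream.Voice([note.Note('C4'), note.Note('D4')]),
                 stream.Voice([note.Note('E3'), note.Note('F3')])])
print([v.offset for v in p_append.voices])   # [0.0, 2.0] – second voice starts after the first

p_insert = stream.Part()
p_insert.insert(0, stream.Voice([note.Note('C4'), note.Note('D4')]))
p_insert.insert(0, stream.Voice([note.Note('E3'), note.Note('F3')]))
print([v.offset for v in p_insert.voices])   # [0.0, 0.0] – both voices start together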
I have the latest version of music21, and I'm using MuseScore 4 to display the output. MuseScore has been working fine up to this point. I did have to change the environment settings because I'm on MuseScore 4 rather than MuseScore 2; I'm not sure whether that is a factor.
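For context, the environment change I mean is the usual music21 path setting pointed at the MuseScore 4 binary, roughly like this (the path below is a placeholder macOS location, not necessarily yours):

from music21 import environment

us = environment.UserSettings()
# us.create()   # only needed once, if no settings file exists yet
us['musicxmlPath'] = '/Applications/MuseScore 4.app/Contents/MacOS/mscore'           # placeholder path
us['musescoreDirectPNGPath'] = '/Applications/MuseScore 4.app/Contents/MacOS/mscore'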
In my setup, MuseScore 3.2.3 is installed, and it shows three voices in two systems: