NodeStream, Sequelize and Big Data


I have an application that returns almost 20 thousand records. I was advised to use a Node stream, and until recently everything was fine. Now the result set has grown to 30 thousand items and it is causing memory problems in Node again.

I want to understand how this code could have worked until now. Shouldn't the Sequelize call below already blow up on its own, given that it loads more than 20 thousand results into a single variable?

    const { count, rows: parcelas } = await Parcela.getParcela().findAndCountAll(queryParams);
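One thing I can do to see how close this call gets to the heap limit is to measure memory around it (a quick sketch using Node's built-in `process.memoryUsage()`):

    // Quick check (sketch): how much heap does the full result set cost?
    const before = process.memoryUsage().heapUsed;
    const { count, rows: parcelas } = await Parcela.getParcela().findAndCountAll(queryParams);
    const after = process.memoryUsage().heapUsed;
    // Rough estimate only; garbage collection can skew the delta.
    console.log(`${count} rows, ~${((after - before) / 1024 / 1024).toFixed(1)} MB of heap`);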

Full code:

    const { Readable } = require('stream');

    const { count, rows: parcelas } = await Parcela.getParcela().findAndCountAll(queryParams);

    // Readable stream that emits one parcela object at a time.
    class StreamFromParcela extends Readable {
        constructor(array) {
            super({ objectMode: true });
            this.array = array;
            this.index = 0;
        }

        _read() {
            if (this.index < this.array.length) {
                this.push(this.array[this.index]);
                this.index += 1;
            } else {
                // Signal end of stream.
                this.push(null);
            }
        }
    }

    const streamParcela = new StreamFromParcela(parcelas);

    // Write the JSON envelope up front so an empty result set still
    // produces valid JSON, then separate the objects with commas.
    res.write('{"parcelas": [');
    let firstChunk = true;
    streamParcela.on('data', (chunk) => {
        if (!firstChunk) {
            res.write(',');
        }
        firstChunk = false;
        res.write(JSON.stringify(chunk));
    });

    streamParcela.on('end', () => {
        res.write(`],"total": ${count}}`);
        res.end();
    });

I tried several approaches; only the code above worked.
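If I understand streams correctly, what I probably need instead is a Readable that pulls rows from the database in pages, so the whole result set is never in memory at once. A rough sketch (assuming `queryParams` includes a stable `order` so pages do not overlap; `BATCH_SIZE` is a value I made up, and `limit`/`offset` are standard Sequelize options):

    const { Readable } = require('stream');

    const BATCH_SIZE = 1000; // made-up value, would need tuning

    // Sketch only: pulls parcelas from the database one page at a time,
    // so at most BATCH_SIZE rows live in memory at once.
    class StreamFromQuery extends Readable {
        constructor(queryParams) {
            super({ objectMode: true });
            this.queryParams = queryParams;
            this.offset = 0;
            this.buffer = [];
            this.done = false;
        }

        async _read() {
            try {
                if (this.buffer.length === 0 && !this.done) {
                    // Fetch the next page from the database.
                    const rows = await Parcela.getParcela().findAll({
                        ...this.queryParams,
                        limit: BATCH_SIZE,
                        offset: this.offset,
                    });
                    this.offset += rows.length;
                    this.done = rows.length < BATCH_SIZE;
                    this.buffer = rows;
                }
                if (this.buffer.length > 0) {
                    this.push(this.buffer.shift());
                } else {
                    this.push(null); // no more rows
                }
            } catch (err) {
                this.destroy(err);
            }
        }
    }

The `data`/`end` wiring from my code above would stay the same; only the stream's data source changes (and `count` would then have to come from a separate `count()` call, since the rows are no longer all loaded up front).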