Hi,

I've created a Node FLAC player using Aurora and FLAC.js that streams FLAC files from an HTTP server.
Streaming from the server on the same Mac works fine; when I do it over the network I start to hit issues, even though the FLAC files in question are identical.
FLAC.js throws a "STREAMINFO can only occur once" error.
After adding a bit of debug output in both the working and failing scenarios, I think I've spotted the difference.
As FLAC.js reads the block header info in readChunk, if it reads the type and size but there isn't enough data left in the stream (i.e. fewer than size bytes), it returns from the function. When it subsequently re-enters the function to carry on reading, it reads another byte from the stream (because it believes it's still processing a block header), thereby advancing the stream a byte into the block header data that it hadn't finished processing last time around. When that misread byte happens to decode as block type 0, which is STREAMINFO, you get the spurious "STREAMINFO can only occur once" error.
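For illustration, here's the shape of the problem and the kind of fix I've tested, as a simplified sketch rather than the actual flac.js source; readBlockHeader and parseBlock are illustrative names of my own:

```js
// Simplified sketch of the metadata parsing loop, not the actual
// flac.js source. Aurora re-enters readChunk() each time a new chunk
// of data arrives from the source.
readChunk: function() {
  var stream = this.stream;

  while (true) {
    if (!this.readBlockHeader) {
      if (!stream.available(4))
        return; // wait until the full 4-byte block header has arrived

      var tmp = stream.readUInt8();
      this.lastBlock = (tmp & 0x80) === 0x80; // high bit marks the last metadata block
      this.blockType = tmp & 0x7f;            // type 0 is STREAMINFO
      this.blockSize = stream.readUInt24();

      // Remember that the header has been consumed, so that a re-entry
      // doesn't read (and misinterpret) another byte as a header.
      this.readBlockHeader = true;
    }

    // Not enough of the block's payload buffered yet: return and wait
    // for the next chunk. The flag above preserves our parse position.
    if (!stream.available(this.blockSize))
      return;

    this.parseBlock(this.blockType, this.blockSize); // hypothetical helper
    this.readBlockHeader = false; // ready for the next block header

    if (this.lastBlock)
      break;
  }
}
```

Without the readBlockHeader guard, every re-entry consumes another header's worth of bytes and drifts into the payload of the block it was waiting on, which is exactly what I'm seeing over the network.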
I've tested a fix locally and will submit a pull request.
Thanks
Rich
I've run into this problem too. In most cases it doesn't appear because AV.FileSource.chunkSize and AV.HTTPSource.chunkSize in aurora.js default to 1 MiB, which is usually enough to parse all of a FLAC file's headers without interrupting the parse partway through a block and leaving extra data in the stream.
To reproduce the issue, create an aurora.js player instance with a much smaller chunk size:
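Something like the following should trigger it (a minimal sketch, not verbatim from my setup: it assumes a build of aurora.js where the source's per-instance chunkSize property controls the read size, and the URL is a placeholder):

```js
var AV = require('av');  // aurora.js
require('flac.js');      // registers the FLAC demuxer/decoder with Aurora

var asset = AV.Asset.fromURL('http://example.com/test.flac');

// Shrink the read size from the 1 MiB default so that a metadata block
// header regularly arrives split across chunks, forcing readChunk() to
// return mid-block and re-enter -- the condition that triggers the bug.
asset.source.chunkSize = 64;

var player = new AV.Player(asset);
player.on('error', function(err) {
  console.error(err); // e.g. "STREAMINFO can only occur once"
});
player.play();
```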