IPFS support #154
Agree with almost everything! Some comments:
I don't think this is a super strict requirement. We upload things to Google Cloud Storage and don't re-download the file to check that it is actually the file we uploaded. Piñata is a much smaller provider, but if we are building our service on top of them we should probably trust them as a service provider. We could still pre-calculate the CID for other reasons though, like giving a CID to every asset even if it's not saved on IPFS, which would allow a more homogeneous use of CIDs as identifiers. I still wouldn't make this a requirement for this first integration though.
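If we ever do want to pre-calculate CIDs ourselves, a minimal sketch in Go could look like the one below. It assumes the go-cid and go-multihash libraries; note that the CID a pinning service reports depends on its chunking/UnixFS DAG layout, so this simple raw-block CID only matches for small files stored as a single block.

```go
package main

import (
	"fmt"
	"os"

	"github.com/ipfs/go-cid"
	mh "github.com/multiformats/go-multihash"
)

// computeRawCID returns a CIDv1 (raw codec, sha2-256) over the file bytes.
// Larger files are chunked into a UnixFS DAG by pinning services, so
// reproducing their CID would require using the same chunking parameters.
func computeRawCID(path string) (cid.Cid, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return cid.Undef, err
	}
	sum, err := mh.Sum(data, mh.SHA2_256, -1)
	if err != nil {
		return cid.Undef, err
	}
	return cid.NewCidV1(cid.Raw, sum), nil
}

func main() {
	c, err := computeRawCID("my-file.mp4")
	if err != nil {
		panic(err)
	}
	fmt.Println(c.String()) // e.g. bafkrei...
}
```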
VOD input

VOD output
I don't think so. Here I just want to make a clear distinction that a "livepeer-defined URL to represent an IPFS pinning service as an Object Store" should not use the ipfs:// scheme. So IMO it will be more useful to get the IPFS CID or gateway URL back in the output instead. Btw, we also have our own branded IPFS gateway through Piñata (the ipfs.livepeer.com host used in the example below).
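As an illustration of how such a pinning-service URL could be handled in go-tools / catalyst-uploader, here is a minimal Go sketch. The pinata:// scheme, host, and credential layout are assumptions mirroring the example below, not a settled format:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Hypothetical object-store URL for a pinning service; scheme, credential
	// layout, and query params are illustrative only.
	raw := "pinata://key:secret@pinata.cloud?name=asset_12345"

	u, err := url.Parse(raw)
	if err != nil {
		panic(err)
	}

	apiKey := u.User.Username()       // Pinata API key
	apiSecret, _ := u.User.Password() // Pinata API secret
	pinName := u.Query().Get("name")  // desired name for the pinned asset

	fmt.Println(u.Scheme, apiKey, apiSecret, pinName)
	// Output: pinata key secret asset_12345
}
```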
For VOD I think it's fine if all the files aren't in the same IPFS directory. We can just have a playlist file pointing to other independent files on IPFS, which is possible because we can store the playlist file after everything else. It's much trickier for livestreams indeed, but I'd argue it doesn't make sense for livestreams anyway if it is a "content address" that is neither permanent nor stable. Might as well have dynamic playlists in that case and only store the segments on IPFS (if we ever do want IPFS-based playback).

I'd also say not to spend a lot of time on this. IPFS playback is not practical right now, and even though they are getting better we should focus on what works today. So IMO starting with only the original MP4 files on IPFS is enough (and that's all we have on Studio today as well, apart from NFT metadata, which won't be handled by Catalyst anyway).
More concrete examples for input/output:
```
{
  "url": "https://storage.google.cloud/my-bucket/my-file.mp4",
  "output_locations": [
    {
      "type": "object_store",
      "url": "pinata://key:[email protected]?name=asset_12345",
      // pinata_access_key field is not necessary, it's embedded in the URL
      "outputs": { "source_mp4": true }
    }
  ]
}
```
(I omitted unrelated fields. The hostname and querystring of the object store URL are also very debatable, I don't love them myself, but I'm including everything I said in the example for completeness.)

And the corresponding output:
```
{
  "status": "completed",
  "outputs": [
    {
      "type": "object_store",
      "manifest": "bafybeien324vbmmtfwe6nuiyfogs3lka3x4mo2rwm32te2ajlfvaeslk7y",
      "videos": [
        {
          "type": "not sure I know what this type means",
          "size": 12345,
          "location": "https://ipfs.livepeer.com/ipfs/bafybeien324vbmmtfwe6nuiyfogs3lka3x4mo2rwm32te2ajlfvaeslk7y"
        }
      ]
    }
  ]
}
```
Just noticed that we have both this manifest field and the location URL pointing at the same CID. Also a side note, we might need to rethink this.
Thanks for the very useful input @victorges. Now I finally have a good idea of how this should work on the catalyst-api side. It makes sense to focus on single-file VOD first, and if we later need to implement HLS and live streaming, we already have the initial research documented here. Let's return both the CID and the full gateway URL from catalyst-api, and maybe only the CID from go-tools, so as not to suggest a specific gateway. On naming: folder wrapping should work fine for immutable content, so the gateway URL can end with the file name; I'll implement that.
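A rough sketch of one way that upload path could look in Go, assuming Pinata's pinFileToIPFS endpoint with its wrapWithDirectory option and IpfsHash response field (worth double-checking against Pinata's docs); the asset name, file name, and gateway host are reused from the examples above for illustration:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"mime/multipart"
	"net/http"
	"os"
	"path/filepath"
)

// pinFile uploads a local file to Pinata with directory wrapping, so the
// resulting gateway URL can end with the original file name, e.g.
// https://ipfs.livepeer.com/ipfs/<dirCID>/my-file.mp4
func pinFile(apiKey, apiSecret, path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var body bytes.Buffer
	w := multipart.NewWriter(&body)
	part, err := w.CreateFormFile("file", filepath.Base(path))
	if err != nil {
		return "", err
	}
	if _, err := io.Copy(part, f); err != nil {
		return "", err
	}
	// Wrap the file in a directory and give the pin a name.
	_ = w.WriteField("pinataOptions", `{"wrapWithDirectory":true}`)
	_ = w.WriteField("pinataMetadata", `{"name":"asset_12345"}`)
	w.Close()

	req, err := http.NewRequest("POST", "https://api.pinata.cloud/pinning/pinFileToIPFS", &body)
	if err != nil {
		return "", err
	}
	req.Header.Set("Content-Type", w.FormDataContentType())
	req.Header.Set("pinata_api_key", apiKey)
	req.Header.Set("pinata_secret_api_key", apiSecret)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	var out struct {
		IpfsHash string `json:"IpfsHash"` // CID of the wrapping directory
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.IpfsHash, nil
}

func main() {
	cid, err := pinFile(os.Getenv("PINATA_KEY"), os.Getenv("PINATA_SECRET"), "my-file.mp4")
	if err != nil {
		panic(err)
	}
	fmt.Printf("https://ipfs.livepeer.com/ipfs/%s/my-file.mp4\n", cid)
}
```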
You are probably right, we can trust the provider at this stage. However, I believe the ultimate goal is to provide a fully verifiable, trustless flow for users who need that. Also, when low-latency streaming is implemented on B-O-T, we'll open the path to per-video-packet verification. That will likely require streaming verification on the storage side as well. Maybe @yondonfu could chime in on that.
Chiming in here.
Focusing first on using IPFS to persist source mp4 assets, to match the status quo functionality in Studio, makes sense to me. As long as we have access to the source assets, we can always generate derived assets as needed (e.g. a source HLS playlist, transcoded renditions, etc.).
In this case, I see two trust relationships: (1) trust in the IPFS gateway/pinning providers, and (2) trust that Studio/Catalyst returns the correct content for an asset.
For 1, in the short term we should be able to trust reputable gateway providers. Later on, we may want more flexibility to use gateway providers that are not trusted, in which case we could look into verifiable retrieval from gateways in the Catalyst integration. For 2, I think we can address this with the verifiable video/RMID work that we've been investigating. The basic idea is that the user calculates a unique hash ID for the raw media of an asset (i.e. video, audio, and metadata tracks plus relative timestamps), agnostic to the container; checks that this ID matches the one calculated by Studio/Catalyst; and uses the ID to check the content returned for a request, with the ability to verify that the raw media is correct even if the response is a transmuxed version. For the case where a transcoded rendition is returned, there would be a signed attestation. This is being fleshed out for Q4!
General considerations

VOD

Input
- If there is a Content-type header in the HTTP GET response, we can use that, but it's not a mandatory requirement for gateway implementations.

Output
- ipfs:// URL support should be added to go-tools and catalyst-uploader. It will use the Pinata API to pin the file and return the Pinata IPFS gateway URL. Content hash matching should be implemented as well (see the sketch after this outline).
- catalyst-uploader ... in the playlist

Live
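On "content hash matching": one possible interpretation, sketched below in Go, is to re-fetch the pinned file from its gateway URL and compare a sha256 of the bytes against the local source file (as discussed above, this may not be required for the first integration; the gateway URL here is a hypothetical placeholder):

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"io"
	"net/http"
	"os"
)

// hashesMatch downloads the pinned file from a gateway and checks that its
// sha256 matches the local source file. This verifies the bytes served by the
// gateway, not the CID/DAG structure itself.
func hashesMatch(localPath, gatewayURL string) (bool, error) {
	f, err := os.Open(localPath)
	if err != nil {
		return false, err
	}
	defer f.Close()

	localSum := sha256.New()
	if _, err := io.Copy(localSum, f); err != nil {
		return false, err
	}

	resp, err := http.Get(gatewayURL)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	remoteSum := sha256.New()
	if _, err := io.Copy(remoteSum, resp.Body); err != nil {
		return false, err
	}

	return bytes.Equal(localSum.Sum(nil), remoteSum.Sum(nil)), nil
}

func main() {
	ok, err := hashesMatch("my-file.mp4", "https://ipfs.livepeer.com/ipfs/<cid>/my-file.mp4")
	if err != nil {
		panic(err)
	}
	fmt.Println("content matches:", ok)
}
```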