File/media upload handled differently #890
Replies: 10 comments 7 replies
-
The endpoint is not handled or changed by Twill, so you probably need to check what Laravel or Flysystem does with it: https://flysystem.thephpleague.com/v1/docs/adapter/aws-s3-v3/
-
So there's definitely something weird here: I'm using DigitalOcean Spaces, and the configuration I use on another project does not work when using Twill, i.e.:
This does not work in Twill, but works normally otherwise. I think it has something to do with the configuration management Twill is doing for filesystems, and honestly I'm not entirely sure why it does that. Now, I CAN get this to work for media, but not files, by adding my bucket to the endpoint, such as:
But then I'm back at the problem: media works, files do not. It's really bizarre. My configuration SHOULD work on the backend, but when setting up the frontend uploader, Twill is not doing any endpoint setup as you'd expect, or it assumes the endpoint is all that's needed, when in reality the bucket should be added. I do not believe this is an AWS S3 adapter or Flysystem issue, as I'm using both of those on other projects with the exact same DO Spaces configuration, with no issues.
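For context, a DigitalOcean Spaces disk definition typically looks something like the sketch below (illustrative names and values, not my exact config), showing the two endpoint shapes discussed above:

```php
// Illustrative sketch (assumed values) of a DO Spaces disk in
// config/filesystems.php, showing both endpoint variants:
'libraries' => [
    'driver' => 's3',
    'key' => env('DO_SPACES_KEY'),
    'secret' => env('DO_SPACES_SECRET'),
    'region' => env('DO_SPACES_REGION', 'nyc3'),
    'bucket' => env('DO_SPACES_BUCKET'),

    // works with plain Laravel/Flysystem, but not with Twill:
    'endpoint' => 'https://nyc3.digitaloceanspaces.com',

    // makes Twill media (but not files) work - bucket baked into the host:
    // 'endpoint' => 'https://my-bucket.nyc3.digitaloceanspaces.com',
],
```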
-
Hi guys, this is all due to this change 5894ccc. See this comment: 5894ccc#r40789394 and pull request: #703. My suggestion to avoid a breaking change while officially supporting DO Spaces would be to add a new config property
-
Thanks for the clarification, @ifox - that certainly looks like it would be it. The config idea is an interesting one, and agreed, not sure it's the right approach. That said, because that helper checks for function existence first, I think for now I'll just implement my own to resolve it. Hopefully we get a long-term solution soon :)
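Since the comment above notes that the helper is guarded by a function-existence check, overriding it can be sketched roughly as follows (the file path and wiring here are assumptions; any file loaded before Twill's helpers, e.g. one registered in composer.json under `autoload.files`, should work):

```php
<?php
// app/helpers.php - loaded early via composer.json:
//   "autoload": { "files": ["app/helpers.php"] }
// Because Twill only defines s3Endpoint() when it doesn't already exist,
// a version defined first takes precedence.
if (!function_exists('s3Endpoint')) {
    function s3Endpoint($disk = 'libraries')
    {
        // custom endpoint logic goes here; as a trivial placeholder,
        // return the configured endpoint untouched
        return config("filesystems.disks.{$disk}.endpoint");
    }
}
```

Run `composer dump-autoload` after adding the file so Composer picks it up.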
-
Yup, just implemented the function myself - thank you SO MUCH @ifox!!! That was a rough day, haha. This definitely needs to be fixed; it needs to be smarter. Basically, if the endpoint does not contain the bucket, add it - else return it as-is. Happy to create a PR if need be.
-
What do you guys think about making it fully custom?

```php
function s3Endpoint($disk = 'libraries')
{
    $config = config("filesystems.disks.{$disk}");

    // parse the custom endpoint or use the internal one
    $endpoint = parse_url(
        $config['endpoint']
            ?? Storage::disk($disk)->getAdapter()->getClient()->getEndpoint()
    );

    // an optional scheme can be set via config or extracted from the endpoint
    $scheme = $config['scheme'] ?? $endpoint['scheme'] ?? '';

    // prefixing with the scheme is optional
    $scheme = ($config['prefix_with_scheme'] ?? true)
        ? "$scheme://"
        : '';

    // prefixing with the bucket is optional
    $prefix = ($config['prefix_with_bucket'] ?? true)
        ? $config['bucket'] . '.'
        : '';

    // if a custom host is not set, use the one from the endpoint
    $host = $config['host'] ?? $endpoint['host'];

    // suffixing with the root is optional
    $suffix = prefixWithSlash(
        ($config['suffix_with_root'] ?? false)
            ? $config['root']
            : ''
    );

    // the endpoint may contain a path, suffix with it
    $path = prefixWithSlash(
        ($config['suffix_with_path'] ?? true)
            ? ($config['path'] ?? $endpoint['path'])
            : ''
    );

    // build the final URL
    return $scheme . $prefix . $host . $suffix . $path;
}
```

Max disk config:

```php
'media_library' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'url' => env('AWS_URL'),
    'endpoint' => env('AWS_ENDPOINT'),
    'prefix_with_scheme' => env('AWS_PREFIX_WITH_SCHEME', true),
    'prefix_with_bucket' => env('AWS_PREFIX_WITH_BUCKET', true),
    'suffix_with_root' => env('AWS_SUFFIX_WITH_ROOT', false),
    'suffix_with_path' => env('AWS_SUFFIX_WITH_PATH', false),
    'host' => env('AWS_HOST'),
    'path' => env('AWS_PATH'),
    'scheme' => env('AWS_SCHEME', 'https'),
    'root' => env('AWS_ROOT', ''),
],
```

Minimal
Results in
Max
Results in
Turning scheme, bucket and root off:
-
That honestly feels like overkill and really over-complicated. The issue is really just whether or not the bucket needs to be included. If an endpoint is provided and the bucket has not been included as part of the URL, add it; else leave it as-is. Alternatively, craft an endpoint by using something like an S3_ENDPOINT_DOMAIN env variable, and construct the endpoint from that plus the bucket?
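The domain-plus-bucket alternative could be sketched like this (the function name and the S3_ENDPOINT_DOMAIN variable are assumptions for illustration, not an existing Twill API):

```php
<?php
// Hedged sketch: build a host-style endpoint from a bare domain and a
// bucket, rather than requiring the full endpoint URL in config.
function s3EndpointFromDomain(string $domain, string $bucket, string $scheme = 'https'): string
{
    // host-style addressing: the bucket becomes a subdomain of the domain
    return "{$scheme}://{$bucket}.{$domain}";
}

// e.g. with S3_ENDPOINT_DOMAIN=nyc3.digitaloceanspaces.com:
echo s3EndpointFromDomain('nyc3.digitaloceanspaces.com', 'my-bucket');
// https://my-bucket.nyc3.digitaloceanspaces.com
```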
-
This is the solution I went with (updated based on recent discussions):

```php
function s3Endpoint($disk = 'libraries')
{
    $endpoint = parse_url(config("filesystems.disks.{$disk}.endpoint"));
    $bucket = config("filesystems.disks.{$disk}.bucket");

    // prefix the host with the bucket unless it already starts with it
    if (strpos($endpoint['host'], $bucket) !== 0) {
        $endpoint['host'] = "$bucket.{$endpoint['host']}";
    }

    $endpoint['scheme'] .= '://';

    return implode('', $endpoint);
}
```
-
I agree it's a lot, but the problem is that these S3 "API-compatible" storage services (Scaleway is another one) are popping up like rabbits now, and they are never 100% aligned with AWS. So we will end up with a new one that has a slightly different URL structure, which will end up becoming a new ticket/change.
-
Given that the only significant difference between the various "S3-compatibles" seems to be URL structure, it doesn't seem like a terribly big ask to support path-based buckets as an option, which should solve everyone's use case: specify the custom endpoint, plus a boolean for whether or not to use path-based addressing - which Flysystem continues to offer. AWS may no longer use it, but Flysystem allows it, it's part of the AWS SDK, and frankly the URL structure is separate from the protocol, which is the S3-compatible bit.

As it stands presently, I can't use this with MinIO at all, because subdomain endpoints are just not feasible in my development environment. Flysystem has this solved - are you not using it under the hood? Why is this difficult? There's already a provision for setting path-based buckets with Flysystem config; it's part of the default Laravel filesystem config file for s3:

Why can't we simply expose and use that option?

Jan 16, 2020:
While they may favor host-style addressing in practice at AWS, there doesn't appear to be any indication that they're removing the path-style option.
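For reference, the path-style flag mentioned above looks roughly like this in a Laravel s3 disk definition (a sketch; the env variable name and surrounding keys may differ by Laravel version):

```php
's3' => [
    'driver' => 's3',
    // ... key, secret, region, bucket, url ...
    'endpoint' => env('AWS_ENDPOINT'),
    // Flysystem's AWS S3 v3 adapter passes this through to the S3Client,
    // which switches from host-style (bucket.domain.tld) to
    // path-style (domain.tld/bucket) addressing
    'use_path_style_endpoint' => env('AWS_USE_PATH_STYLE_ENDPOINT', false),
],
```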
-
I'm not entirely sure what's going on here, but effectively what's happening is that media uploads work fine, but when files are uploaded, it seems to append the bucket to the domain when using a custom endpoint, i.e.:

```
S3_ENDPOINT=https://mydomain.nyc3.digitaloceanspaces.com
```

The requests for file uploads work fine (it uploads the file). It's the subsequent request posting to the server to save the files that fails:

```
Error executing "ListObjects" on "https://mydomain.mydomain.nyc3.digitaloceanspaces.com
```

I have tried all sorts of configuration changes and updates, all to no avail.
Any ideas?