Mounting DigitalOcean spaces #157

Open
duanemalcolm opened this issue Sep 27, 2017 · 2 comments

Comments

@duanemalcolm

Hello,

I'm trying to use yas3fs to mount DigitalOcean's new object storage service, Spaces. Has anyone had any success with this?

Cheers, Duane.

I run the command:
yas3fs s3://test /home/spaces --s3-endpoint nyc3.digitaloceanspaces.com -d

When I list the contents of the directory:

root@worker1:~# ls spaces 
ls: cannot access 'spaces': Bad address

The mount command itself (run with -d) produces the following debug output:

MainThread 2017-09-27T13:04:35.419 DEBUG options = Namespace(aws_managed_encryption=False, buffer_prefetch=0, buffer_size=10240, cache_check=5, cache_disk_size=1024, cache_entries=100000, cache_mem_size=128, cache_on_disk=0, cache_path='', debug=True, download_num=4, download_retries_num=60, download_retries_sleep=1, expiration=2592000, foreground=False, gid=None, hostname=None, id=None, log=None, log_backup_count=10, log_backup_gzip=False, log_mb_size=100, mkdir=False, mountpoint='/home/spaces', mp_num=4, mp_retries=3, mp_size=100, new_queue=False, new_queue_with_hostname=False, no_allow_other=False, no_metadata=False, nonempty=False, port=None, prefetch=False, prefetch_num=2, queue=None, queue_polling=0, queue_wait=20, read_only=False, read_retries_num=10, read_retries_sleep=1, recheck_s3=False, region='us-east-1', requester_pays=False, s3_endpoint='nyc3.digitaloceanspaces.com', s3_num=32, s3_retries=3, s3_retries_sleep=1, s3_use_sigv4=False, s3path='s3://test', st_blksize=None, topic=None, uid=None, umask=None, use_ec2_hostname=False, with_plugin_class=None, with_plugin_file=None)
MainThread 2017-09-27T13:04:35.419 INFO Version: 2.3.5
MainThread 2017-09-27T13:04:35.419 INFO s3-retries: '3'
MainThread 2017-09-27T13:04:35.420 INFO s3-retries-sleep: '1' seconds
MainThread 2017-09-27T13:04:35.420 INFO S3 bucket: 'test'
MainThread 2017-09-27T13:04:35.420 INFO S3 prefix (can be empty): 'test'
MainThread 2017-09-27T13:04:35.420 INFO Cache entries: '100000'
MainThread 2017-09-27T13:04:35.420 INFO Cache memory size (in bytes): '134217728'
MainThread 2017-09-27T13:04:35.420 INFO Cache disk size (in bytes): '1073741824'
MainThread 2017-09-27T13:04:35.420 INFO Cache on disk if file size greater than (in bytes): '0'
MainThread 2017-09-27T13:04:35.421 INFO Cache check interval (in seconds): '5'
MainThread 2017-09-27T13:04:35.421 INFO Cache ENOENT rechecks S3: False
MainThread 2017-09-27T13:04:35.421 INFO AWS Managed Encryption enabled: False
MainThread 2017-09-27T13:04:35.421 INFO AWS Managed Encryption enabled: False
MainThread 2017-09-27T13:04:35.421 INFO Number of parallel S3 threads (0 to disable writeback): '32'
MainThread 2017-09-27T13:04:35.421 INFO Number of parallel downloading threads: '4'
MainThread 2017-09-27T13:04:35.421 INFO Number download retry attempts: '60'
MainThread 2017-09-27T13:04:35.422 INFO Download retry sleep time seconds: '1'
MainThread 2017-09-27T13:04:35.422 INFO Number read retry attempts: '10'
MainThread 2017-09-27T13:04:35.422 INFO Read retry sleep time seconds: '1'
MainThread 2017-09-27T13:04:35.422 INFO Number of parallel prefetching threads: '2'
MainThread 2017-09-27T13:04:35.422 INFO Download buffer size (in KB, 0 to disable buffering): '10485760'
MainThread 2017-09-27T13:04:35.422 INFO Number of buffers to prefetch: '0'
MainThread 2017-09-27T13:04:35.422 INFO Write metadata (file system attr/xattr) on S3: 'True'
MainThread 2017-09-27T13:04:35.422 INFO Download prefetch: 'False'
MainThread 2017-09-27T13:04:35.422 INFO Multipart size: '104857600'
MainThread 2017-09-27T13:04:35.423 INFO Multipart maximum number of parallel threads: '4'
MainThread 2017-09-27T13:04:35.423 INFO Multipart maximum number of retries per part: '3'
MainThread 2017-09-27T13:04:35.423 INFO Default expiration for signed URLs via xattrs: '2592000'
MainThread 2017-09-27T13:04:35.423 INFO S3 Request Payer: 'False'
MainThread 2017-09-27T13:04:35.423 INFO Cache path (on disk): '/tmp/yas3fs/test'
MainThread 2017-09-27T13:04:35.435 INFO Unique node ID: 'be86a51e-a4e2-42c3-8066-c3b1e9ff4b2b'
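
For anyone hitting the same wall, here is a minimal sketch of the full sequence for the mount command above, assuming the Spaces access key pair is exported through the standard AWS environment variables that boto (and therefore yas3fs) reads; the key values below are placeholders for illustration, not real credentials:

# Placeholder Spaces keys, generated in the DigitalOcean control panel
export AWS_ACCESS_KEY_ID='SPACES_ACCESS_KEY'
export AWS_SECRET_ACCESS_KEY='SPACES_SECRET_KEY'

# Make sure the mount point exists, then mount the bucket against the Spaces endpoint
mkdir -p /home/spaces
yas3fs s3://test /home/spaces --s3-endpoint nyc3.digitaloceanspaces.com -d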
@paolobarbolini

I tried it on a bucket in ams3 and it seems to be working perfectly.

@duanemalcolm
Author

Thanks. I just got it working with s3fs, so I'll give yas3fs another go; it should work. Your reply was timely because it reminded me of yas3fs. Cheers.
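
The working s3fs setup isn't shown in the thread; a rough sketch of how s3fs is commonly pointed at a Spaces endpoint might look like the following, reusing the bucket and region from the yas3fs command above and placeholder keys:

# Hypothetical credentials file for s3fs (format: ACCESS_KEY:SECRET_KEY)
echo 'SPACES_ACCESS_KEY:SPACES_SECRET_KEY' > ${HOME}/.passwd-s3fs
chmod 600 ${HOME}/.passwd-s3fs

# Point s3fs at the Spaces endpoint instead of the default AWS S3 URL;
# use_path_request_style may not be required, but is a common fallback for S3-compatible stores
s3fs test /home/spaces -o passwd_file=${HOME}/.passwd-s3fs -o url=https://nyc3.digitaloceanspaces.com -o use_path_request_style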
