Tags: amazon-s3, ansible, fuse, s3fs

Multiple entries for the same bucket(default) in the passwd file


I am trying to re-run an Ansible playbook for an old third-party integration; the task looks like this:

- name: "mount s3fs Fuse FS on boot from [REDACTED] on [REDACTED]"
  mount:
    name: "{{ [REDACTED] }}/s3/file_access"
    src: "{{ s3_file_access_bucket }}:{{ s3_file_access_key }}"
    fstype: fuse.s3fs
    opts: "_netdev,uid={{ uid }},gid={{ group }},mp_umask=022,allow_other,nonempty,endpoint={{ s3_file_access_region }}"
    state: mounted
  tags:
    - [REDACTED]

I'm receiving this error:

fatal: [REDACTED]: FAILED! => {"changed": false, "failed": true, "msg": "Error mounting /home/[REDACTED]: s3fs: there are multiple entries for the same bucket(default) in the passwd file.\n"}

I'm trying to find the passwd file so I can clean it out, but I don't know where it lives.

Does anyone recognize this error?


Solution

  • s3fs checks /etc/passwd-s3fs and $HOME/.passwd-s3fs for credentials. It appears that one of these files has duplicate entries that you need to remove; the expected file format is shown in the example below.

    Your Ansible src stanza also attempts to supply credentials, but I do not believe that will work; the src should just name the bucket (and optionally a path). Instead you can supply the credentials via the AWSACCESSKEYID and AWSSECRETACCESSKEY environment variables, as sketched below.
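
    For reference, each line in an s3fs passwd file is either a "default" entry of the form ACCESSKEYID:SECRETACCESSKEY, or a per-bucket entry of the form bucketname:ACCESSKEYID:SECRETACCESSKEY. Your error means the file contains more than one line without a bucket prefix. A cleaned-up file looks something like this (the keys and the bucket name here are placeholders):

        AKIAXXXXXXXXXXXXXXXX:exampleSecretAccessKeyExampleSecretAccessKey
        my-other-bucket:AKIAYYYYYYYYYYYYYYYY:exampleSecretAccessKeyExampleSecretAccessKey

    Keep at most one of the prefix-less "default" lines and delete the rest. Also check the file permissions: s3fs refuses a credentials file that is readable by others (roughly 640 for /etc/passwd-s3fs, 600 for ~/.passwd-s3fs).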
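
    If you want to wire the environment-variable approach into the same playbook, one option is to set the variables on the mount task itself, since s3fs is started as a child of that task and inherits them. This is only a sketch: s3_file_access_key_id, s3_file_access_secret_key and mount_root are placeholder names standing in for wherever you actually keep the credentials and for the redacted mount path.

        - name: "mount s3fs Fuse FS on boot, credentials from environment"
          mount:
            name: "{{ mount_root }}/s3/file_access"
            src: "{{ s3_file_access_bucket }}"   # bucket name only, no :key suffix
            fstype: fuse.s3fs
            opts: "_netdev,uid={{ uid }},gid={{ group }},mp_umask=022,allow_other,nonempty,endpoint={{ s3_file_access_region }}"
            state: mounted
          environment:
            AWSACCESSKEYID: "{{ s3_file_access_key_id }}"
            AWSSECRETACCESSKEY: "{{ s3_file_access_secret_key }}"
          no_log: true

    Bear in mind that these variables only exist while Ansible runs the task; a remount at boot (which is what _netdev is for) will still fall back to /etc/passwd-s3fs or ~/.passwd-s3fs, so the duplicate default entry needs to be cleaned up either way.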