I am sure the suggestions here will be to use an S3 bucket, and I am aware of that option. My question is a bit more specific: I want to use pygsheets, a Python library, to write to a Google Sheet from an AWS Lambda function. I have worked through all the deployment and layer steps, but what is stopping me is that one of pygsheets' functions needs to read a pesky .json credentials file. I believe it may also read and write something else on the fly, which may not be allowed in Lambda's filesystem in the first place, but I am asking regardless.
Link to the function that needs to be used in conjunction with the secret .json from Google: pygsheets GitHub
Sample code:
print("-->Using the library pygsheets to update...")
print(f"-->Accessing client_secret.json")
gc = pygsheets.authorize(service_file='client_secret.json')
print(f"-->Opening Google Sheets")
#open the google spreadsheet
sh = gc.open_by_url('https://...')
print(f"-->Accessing")
#select the first sheet
wks = sh[0]
print(f"-->Updating selected cells... ")
#update the first sheet with df, starting at cell A11.
wks.set_dataframe(df, 'J14')
Again, I am so close to my final product of automating my sheets with this script/library/Lambda that I can taste it :). If the absolute best workaround really is S3, please be gentle; I am a first-year analyst trying to get my feet wet. My superior also tells me it would take a while to hook up a connection to S3, so that is another reason I would like to avoid it. Thanks!
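For comparison, the commonly suggested S3 workaround would look roughly like the sketch below: download the credentials file into Lambda's writable /tmp directory at runtime with boto3, then point pygsheets at that local copy. The bucket and key names here are placeholders, not anything from my setup:

import boto3
import pygsheets

s3 = boto3.client('s3')

def handler(event, context):
    # /tmp is the only writable path inside a Lambda execution environment
    local_path = '/tmp/client_secret.json'
    # hypothetical bucket and key names; substitute your own
    s3.download_file('my-creds-bucket', 'client_secret.json', local_path)
    gc = pygsheets.authorize(service_file=local_path)
    sh = gc.open_by_url('https://...')
    wks = sh[0]
    # continue with wks.set_dataframe(...) as in the sample above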
Fixed. I simply added the .json creds to the deployment package. I had run into an issue with pandas, so I ended up with a blend of layers and a deployment package containing my .py script (and, again, the secret .json). Thanks!
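For anyone landing here later, a minimal sketch of what that looks like inside the handler, assuming client_secret.json is zipped next to the .py script: the deployment package is unpacked into the function's task root (/var/task), so a path built relative to the handler file resolves to the bundled credentials. The handler name and return value here are just illustrative:

import os
import pygsheets

# the .json bundled in the deployment package sits next to this .py file in /var/task
SECRET_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'client_secret.json')

def handler(event, context):
    gc = pygsheets.authorize(service_file=SECRET_PATH)
    sh = gc.open_by_url('https://...')
    wks = sh[0]
    # pandas comes from a layer; build df and call wks.set_dataframe(df, 'J14') as in the sample
    return {'status': 'sheet updated'}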