I am trying to create an "upload to OneDrive" button on my website. I would like it to open a user authentication window so that the user can upload a file created on my website directly to their own OneDrive.
I have already done the same thing with Google Drive, and so simply that I do not understand why I cannot find a solution for OneDrive (and Dropbox as well, for that matter).
My languages are Node.js, Python, and JS.
Create a database to store the files. Try using MongoDB (via the mongodb Node package) to create a database to store the uploaded files.
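A minimal sketch of that idea, assuming the official mongodb Node driver with GridFS for the file contents (the connection string, database name, and paths are placeholders):

// Store an uploaded file in MongoDB using GridFS (mongodb Node driver).
const { MongoClient, GridFSBucket } = require('mongodb');
const fs = require('fs');

async function storeFile(localPath, filename) {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const bucket = new GridFSBucket(client.db('uploads'));
  await new Promise((resolve, reject) => {
    fs.createReadStream(localPath)
      .pipe(bucket.openUploadStream(filename)) // streams the bytes into GridFS
      .on('finish', resolve)
      .on('error', reject);
  });
  await client.close();
}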
I am working on a side project where users can upload images of receipts to their Google Drive, and the application will extract the text from the images with OCR. I have been tasked with saving information such as file name, file ID, file type, file URL, etc., in a database so we can access the uploaded picture to run OCR on it. The user's auth token should also be saved in the DB, along with some general user information.
How do I go about getting that information? I was able to upload the pictures successfully after authenticating with Google, but I have no idea where to find the file ID/file URL. I have access to the file name, file type, and file path (except, from what I can tell, that path is not where the actual photo is held in the user's Drive).
Does anyone have any ideas on how I can complete my task? I can link my GitHub with the current code if that's allowed. Thank you.
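From reading the Drive v3 docs, I think something like this (googleapis Node client, untested) should return the ID and URL in the create response if those fields are requested, but I am not sure it is the right approach:

// Upload a receipt image and ask Drive to return the metadata we want to store.
const { google } = require('googleapis');
const fs = require('fs');

async function uploadReceipt(auth, localPath) {
  const drive = google.drive({ version: 'v3', auth });
  const res = await drive.files.create({
    requestBody: { name: 'receipt.jpg' },
    media: { mimeType: 'image/jpeg', body: fs.createReadStream(localPath) },
    fields: 'id, name, mimeType, webViewLink', // webViewLink is the file's URL in Drive
  });
  return res.data; // { id, name, mimeType, webViewLink }
}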
So I am trying to figure out how to download an array of images to a user's computer. I have been routing everything through my server, as I feel more secure using Firebase on the server. So, on a button click on the client, I can get back an array of the images in my Firebase Storage bucket.
Button click -> call server -> get a return of the array of urls from firebase
Now, is there a way to download these to the user's computer? Prompt them to choose a file path, or download the files directly?
I know I can auto-download a single file like this:
var a = $("<a>").attr("href", url).attr("download", "img.png").appendTo("body");
a[0].click();
a.remove();
I have tested a single-URL download that runs automatically on the button click, but I don't feel like I should have to loop through the array one at a time to download them all, nor do I know if that would work. I assume it would, since a single URL works.
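Something like this untested sketch is what I mean by looping (the delay is there because I have read that browsers may throttle rapid successive downloads):

// Trigger one download per URL with the same hidden-anchor trick, staggered.
function downloadAll(urls) {
  urls.forEach(function (url, i) {
    setTimeout(function () {
      var a = $("<a>").attr("href", url).attr("download", "img-" + i + ".png").appendTo("body");
      a[0].click();
      a.remove();
    }, i * 500); // 500 ms apart
  });
}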
Is there a better way?
There is no way to download multiple files in one request from Firebase Storage. If you want to allow downloading of multiple files, you'll have to store them in a single file (say, a zip) and use the approach you already use today for downloading that file.
Alternatively, you can use the Google Cloud Storage API to download a bunch of files. See this answer for more on that, but be aware that the Google Cloud Storage API is meant for use on an app server, not directly in your web page.
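A rough sketch of that server-side route, assuming the @google-cloud/storage Node client (bucket name, file list, and destination directory are placeholders):

// Download a list of objects from a bucket onto the server's disk.
const { Storage } = require('@google-cloud/storage');
const storage = new Storage(); // uses the server's default credentials

async function fetchAll(bucketName, filePaths) {
  const bucket = storage.bucket(bucketName);
  for (const filePath of filePaths) {
    await bucket.file(filePath).download({
      destination: '/tmp/' + filePath.split('/').pop(),
    });
  }
}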
We are using the Google Picker to fetch a file in our application. Previously this was a one-time thing, but because we don't allow data manipulation in our app, users need to make their changes on Drive and then go through the upload/fetch process again. We want to simplify the workflow and let the user do a one-click refresh/resync of the file (a spreadsheet). To do that, I am thinking of saving the file_id in my app, though I'll still need an oAuthToken to build the service and fetch the file. Is that the right approach, or can you suggest another mechanism I could follow?
The current Google Picker workflow uses JS on the client side, which provides the oAuthToken, fileId, and name of the file. On the server side I use Python to build the service and fetch the actual file.
In my opinion, your process, i.e. saving the file ID and access/refresh tokens and fetching the file when you need it, is the correct way to go.
I have built an application where we manage our files using Google Drive. In short, my process was:
1. User consent and auth.
2. Use the Picker to upload files and, upon success, save the necessary file data.
3. Fetch the file (with the access token, or use the refresh token to get a new access token) whenever we need to show it to the user; see the sketch below.
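For illustration, the refresh-token fetch looks roughly like this in Node with the googleapis client (the question uses Python server-side, but the shape is the same; the IDs, secrets, and tokens are placeholders):

// Rebuild an authorized Drive client from a stored refresh token and
// re-export the picked spreadsheet as CSV.
const { google } = require('googleapis');

async function resyncSheet(storedRefreshToken, storedFileId) {
  const oauth2Client = new google.auth.OAuth2(CLIENT_ID, CLIENT_SECRET, REDIRECT_URI);
  oauth2Client.setCredentials({ refresh_token: storedRefreshToken });
  const drive = google.drive({ version: 'v3', auth: oauth2Client });
  // files.export is for native Google Sheets; use files.get with alt=media
  // for an uploaded .xlsx instead.
  const res = await drive.files.export(
    { fileId: storedFileId, mimeType: 'text/csv' },
    { responseType: 'stream' }
  );
  return res.data; // stream of the refreshed spreadsheet
}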
I hope this helps.
Cheers
I have a web application that currently stores documents for users on my site in my Amazon S3 bucket. I am looking to enable users to open and edit their documents from my S3 bucket using Google Docs. How would I do that?
I don't think you will be able to open your documents hosted on S3 from Google Docs, unless you get your users to save them first.
I have a crazy idea that could work:
1. Share the documents with your users (either by making them public or just sharing the link).
2. Use Google Drive to sync those documents to some server.
3. Periodically upload the documents to S3 (see the sketch below).
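A rough sketch of step 3 with the AWS SDK for Node, assuming the Drive-synced folder lives at SYNC_DIR on the server (bucket name and paths are placeholders):

// Every 15 minutes, push everything in the synced folder to S3.
const AWS = require('aws-sdk');
const fs = require('fs');
const path = require('path');

const s3 = new AWS.S3();
const SYNC_DIR = '/srv/drive-sync';

function uploadAll() {
  fs.readdirSync(SYNC_DIR).forEach(function (name) {
    s3.upload({
      Bucket: 'my-docs-bucket',
      Key: name,
      Body: fs.createReadStream(path.join(SYNC_DIR, name)),
    }, function (err) {
      if (err) console.error('upload failed for', name, err);
    });
  });
}

setInterval(uploadAll, 15 * 60 * 1000);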
My web app allows users to record their geolocation data. I need to somehow get that geolocation data into a file so I can put it in S3 storage. I have no idea how to go about this, but the controller already has file uploads to S3 set up using Paperclip.
Is there some way to generate a file with JavaScript and then attach that file when the user clicks save? The other option I was thinking of is that I could add a bunch of strings to the body using jQuery's .data() method, but then I don't know how to attach a string as a file in my Rails 3 form.
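On the client, something like this untested sketch is what I had in mind (the form ID and param name are placeholders):

// Before the form is submitted, serialize the recorded positions into a
// hidden field so the Rails controller receives them as a regular param.
$("#save-form").on("submit", function () {
  var json = JSON.stringify(recordedPositions); // your array of coordinates
  $("<input>", { type: "hidden", name: "geolocation_data", value: json }).appendTo(this);
});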
Any help is appreciated.
Maybe you should try out Amazon's SimpleDB instead of S3 for this? It would be more appropriate than creating files to store data in S3.
Amazon recently released a Ruby SDK for their web services, SDB included:
https://github.com/amazonwebservices/aws-sdk-for-ruby
Edit: Or better yet, forget using SDB directly. I had forgotten that the SDK includes an ActiveModel implementation called AWS::Record, which should make this trivial.
I'm assuming you're on Heroku or something and don't have a method of data persistence?
Edit: Looking quickly at Paperclip's assign method, there's a chance this would work:
# Wrap the posted string in a StringIO so Paperclip treats it as an uploaded file
yourmodel.your_paperclip_attachment = StringIO.new(params[:your_posted_geolocation_data])
Paperclip appears to handle creating a tempfile from the stream.