I have a web app (MVC4) with a file upload, and I made a terrible mistake: I let clients upload files directly to my server (an Azure virtual machine).
To make that work, I set the following in the Web.config:
```
maxRequestLength="2097151"
executionTimeout="3600"
maxAllowedContentLength="4294967295"
```
Now I understand that this is not the way to do it.
What I want is to upload the users' files directly to my Azure Blob Storage without the files ever reaching my site.
I have managed to upload files to my storage with C#, but I don't want to send the files to the server side and handle them there, because that means the files are first uploaded to the server and then moved to blob storage, which is not acceptable since I'm dealing with very large files.
I need to transfer the files to blob storage without going through the server.
How can I do it? I haven't found many articles addressing this issue; I've only read that SAS and CORS can help solve the problem, but without actual guidelines to follow.
You are right that CORS and SAS are the correct way to do this. I would recommend reading the Create and Use a SAS with the Blob Service article for an introduction. Please also see our demo at Build 2014 for a sample application that lists and downloads blobs in JavaScript using CORS and SAS.
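The basic flow: your server issues a short-lived SAS URL scoped to a single blob, and the browser PUTs the file straight to that URL, so it never touches your web server. A minimal sketch, assuming a hypothetical /api/sas endpoint that returns such a URL and a CORS rule on the storage account allowing PUT from your origin:

```javascript
async function uploadToBlob(file) {
  // Ask the server for a short-lived, write-only SAS URL (endpoint name is an assumption)
  const res = await fetch('/api/sas?blobName=' + encodeURIComponent(file.name));
  const { sasUrl } = await res.json();

  // PUT the file directly to Azure Blob Storage
  const upload = await fetch(sasUrl, {
    method: 'PUT',
    headers: {
      'x-ms-blob-type': 'BlockBlob', // required by the Put Blob REST operation
      'Content-Type': file.type || 'application/octet-stream'
    },
    body: file
  });
  if (!upload.ok) throw new Error('Upload failed: ' + upload.status);
}
```

A single Put Blob call is capped in size, so for very large files you would upload in chunks via Put Block / Put Block List; the Azure Storage client libraries can handle that chunking for you.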
Related
I have written a game in JavaScript with the p5.js library. Now I want to host the game on a server to conduct a survey on a service like Amazon Mechanical Turk. Ideally the clients receive a URL to the game and play it while in-game actions are tracked and stored in Node.js or on the server, then exported as a .csv file once they are done playing. After they finish the game, the .csv file should be sent automatically to a location that I can then access. I have zero experience in server hosting or similar topics.
So a couple of questions arise:
Is a hosting service like Heroku suitable for hosting the game?
Do I need to use Node.js to make this happen?
Which of those two would extract the data and store it in a .csv? And where is the file stored?
How do I get or access the .csv file afterwards?
Any alternative takes to solve the problem?
Thanks a lot in advance!
github repository: https://github.com/luuuucaaa/schaeffers-charade
game on github pages: https://luuuucaaa.github.io/schaeffers-charade/
If I were you, I would do it like below:
Host
Since your project is basically static HTML & JavaScript content, AWS S3's static website hosting would be sufficient (the current GitHub Pages site is also an option if you just want to host it).
Hosting in a Node.js environment is also possible using webpack serving, but it requires additional work (though if you need other npm packages to generate the .csv file, you need webpack anyway to bundle the JS and attach it to the HTML).
Data Storing
Two approaches are worth considering.
The first is to store it on the filesystem: generate the .csv via a JS script within your app and save it where the app is hosted (if you go with S3, you can access it afterwards, but I'm not sure a statically hosted site can write objects by script).
The second is to post the data to another API endpoint (for example, building an API Gateway on AWS that triggers a Lambda, which stores it on S3).
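For the second approach, a minimal sketch of the client-side part, assuming a hypothetical API Gateway endpoint that accepts the tracked events and persists them (the URL, the event fields, and the CSV columns are all assumptions):

```javascript
// Turn the tracked in-game actions into CSV and POST them when the player finishes.
async function submitResults(events) {
  const header = 'timestamp,action,value';
  const rows = events.map(e => `${e.timestamp},${e.action},${e.value}`);
  const csv = [header, ...rows].join('\n');

  // The endpoint URL is a placeholder for your own API Gateway stage
  await fetch('https://example.execute-api.us-east-1.amazonaws.com/prod/results', {
    method: 'POST',
    headers: { 'Content-Type': 'text/csv' },
    body: csv
  });
}
```

On the AWS side, the Lambda behind the endpoint would simply write the request body to an S3 object you can download later.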
This is merely an example and I don't know exactly what you want to achieve, but take it into consideration. Good luck. Cool game, BTW.
I am trying to build an image upload system with Node and Azure Blob Storage. I have decided to use the Azure Blob browser JS SDK to upload images. I have a dilemma in the above process: how do I tell the server the blob name? I thought of the following approach, but it has several problems:
Let's just say I give the blob a UUID and send the same UUID to the server, but the client JS can be tampered with, so the real blob name and the one sent to the server can differ.
It might be that my approach is completely wrong. Please reply, I am a newbie to web development.
If I understand your question, I'm doing something similar in ASP.NET and SQL, where I actually store the blob itself in Azure Storage BUT in a SQL table it's really just a row of metadata, where each image has an internal numerical seed ID, a "user-defined" name (which does not have to be unique across the system), and the GUID used for the blob name stored in Azure.
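To make the recorded name trustworthy, the server can choose the blob name itself and hand the client a SAS scoped to exactly that blob, so the client never gets to pick (or lie about) the name. A minimal sketch in Node, assuming Express and @azure/storage-blob; the endpoint path, the 'images' container, and the environment variables are assumptions:

```javascript
const { randomUUID } = require('crypto');
const express = require('express');
const {
  StorageSharedKeyCredential,
  generateBlobSASQueryParameters,
  BlobSASPermissions
} = require('@azure/storage-blob');

const account = process.env.AZURE_STORAGE_ACCOUNT; // assumed env vars
const credential = new StorageSharedKeyCredential(account, process.env.AZURE_STORAGE_KEY);
const app = express();

app.post('/api/upload-url', (req, res) => {
  const blobName = randomUUID(); // server-chosen; the client cannot change it
  const sas = generateBlobSASQueryParameters({
    containerName: 'images',
    blobName,
    permissions: BlobSASPermissions.parse('cw'),      // create/write only
    expiresOn: new Date(Date.now() + 10 * 60 * 1000)  // valid for 10 minutes
  }, credential).toString();

  // Also insert the metadata row (id, user-defined name, blobName) here,
  // as described in the answer above, so the DB and the blob always agree.
  res.json({ url: `https://${account}.blob.core.windows.net/images/${blobName}?${sas}` });
});

app.listen(3000);
```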
I have a server-side subfolder structure in my HTML5/JS site.
The subfolder structure contains various media types where each media file is wrapped in its own HTML file which contains metatags.
I want to list all metatags for all files, but I do not want to have to browse for a file (i.e. no FileSystem API) to get its metadata. I just want to scan through the subfolder and list the metadata in each file.
I'm not able to find any script to do this; everything I keep running into asks for the FileSystem API and requires browsing for a file.
Alternatively, if the FileSystem API can do this, I'd use it as long as I don't have to go browsing for files to use it.
My server is a standard LAMP server and the files are all HTML files inside a site subfolder. This site currently has no DB and I'm hoping to not add one for this functionality.
Any help would be appreciated.
Maybe Node.js would be a good fit for you. Then you can write everything in JavaScript. It is server-side scripting, but for demonstration purposes the configuration is much easier than Apache's.
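A minimal sketch of that idea: recursively walk a subfolder on the server, read each .html file, and print its <meta> tags (the 'media' folder name is an assumption, and the regex is a naive stand-in for a real HTML parser such as cheerio):

```javascript
const fs = require('fs');
const path = require('path');

// Recursively visit every .html file under dir and list its <meta> tags
function walk(dir) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      walk(full);
    } else if (entry.name.endsWith('.html')) {
      const html = fs.readFileSync(full, 'utf8');
      const metas = html.match(/<meta\b[^>]*>/gi) || [];
      console.log(full, metas);
    }
  }
}

walk('media');
```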
If I understand correctly that you don't want to browse the file system on the server side, but are willing to do anything with JavaScript on the client side, then the following may also be an option.
Using LAMP, you can configure Apache to show directory indexes (https://httpd.apache.org/docs/current/mod/mod_dir.html#directoryindex), which you can use to browse to the content you need on the client side.
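In that case, a minimal client-side sketch: fetch Apache's generated index page for the folder, collect the .html links, then fetch each file and pull its <meta> tags out with DOMParser (the /media/ path is an assumption):

```javascript
async function listMetaTags() {
  // Apache's directory index is itself an HTML page we can parse
  const index = await fetch('/media/').then(r => r.text());
  const doc = new DOMParser().parseFromString(index, 'text/html');
  const links = [...doc.querySelectorAll('a')]
    .map(a => a.getAttribute('href'))
    .filter(href => href && href.endsWith('.html'));

  for (const href of links) {
    const page = await fetch('/media/' + href).then(r => r.text());
    const metas = new DOMParser().parseFromString(page, 'text/html')
      .querySelectorAll('meta');
    console.log(href, [...metas].map(m => m.outerHTML));
  }
}
```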
In any case, you will have to hit the files on your server somehow, either directly through the file system on the server side or with an HTTP request from the client side.
I have a web application with a JavaScript frontend and a Java backend.
I have a use case where users can upload their profile pictures. The use case is simple: send the image from the user's browser and store it in S3, with a maximum file size. Currently the frontend sends the image data stream to the backend, and the backend stores the image in S3 using the AWS Java SDK.
The backend is obliged to store this image in a file system first in order to know the image file size (and to avoid reading more bytes than a certain maximum), since S3 requires the PUT Object request to be sent along with the file's Content-Length.
Is there any other way I can do this with AWS? Using another service? Maybe Lambda? I don't like this approach of having to store the file in a file system first and then open a stream again to send it to S3.
Thank you very much in advance.
You might get the file size on the client side, as mentioned here, but consider browser support.
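A minimal sketch of that client-side check via the HTML5 File API (the '#avatar' input and the 5 MB cap are assumptions); the backend still has to enforce the limit, since client-side checks can be bypassed:

```javascript
document.querySelector('#avatar').addEventListener('change', (event) => {
  const file = event.target.files[0];
  const MAX_BYTES = 5 * 1024 * 1024; // assumed maximum
  if (file && file.size > MAX_BYTES) {
    alert('File too large: ' + file.size + ' bytes');
    return;
  }
  // Otherwise proceed with the upload; file.size is the exact byte count,
  // so it can also be sent along as the expected Content-Length.
});
```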
You shouldn't share your keys with the client-side code. I believe Query String Authentication (S3 presigned URLs) should be used in this scenario.
Assuming your maximum file size is less than your available memory, why can't you just read it into a byte[] or something similar, so you can send it to S3 without writing it to disk? You should also be able to get the size that way.
It depends on the S3 Java client you are using. If it allows you to read from a ByteArrayInputStream, then you should be able to do what you need.
Looks like you can use an InputStream. See the javadoc:
```java
new PutObjectRequest(String bucketName,
                     String key,
                     InputStream input,
                     ObjectMetadata metadata)
```
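In practice you would read the upload into a byte[], wrap it in a ByteArrayInputStream, and call metadata.setContentLength(bytes.length) before issuing the put, so S3 gets its Content-Length without a temporary file on disk.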
I'm making an application in the Express framework where the user uploads a zip file and views it on the website. I already have uploading a single HTML file and viewing it working; however, I can't seem to figure out extracting a zip file online. I can currently store the zip file in the database, but when it's pulled from the database it seems to be impossible to unzip it to a URL rather than onto my disk. Where do you think I should start with trying to solve this problem?
I suggest using this module: https://github.com/cthackers/adm-zip - I have succeeded in using it when a user uploads a file that must be unzipped server side. I think the docs for the native zlib API are missing or not yet supplied. Let me know if this helps.
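A minimal sketch of how that could look in Express: the zip is loaded as a Buffer (loadZipFromDb below is a hypothetical helper returning your stored zip), each entry is read in memory with adm-zip, and served at a URL with nothing written to disk:

```javascript
const express = require('express');
const AdmZip = require('adm-zip');
const app = express();

// Serve files out of an uploaded zip without touching the disk.
app.get('/archive/:name', async (req, res) => {
  const zip = new AdmZip(await loadZipFromDb()); // hypothetical DB helper
  const entry = zip.getEntry(req.params.name);   // e.g. GET /archive/index.html
  if (!entry) return res.sendStatus(404);
  res.type(req.params.name.split('.').pop()).send(entry.getData());
});

app.listen(3000);
```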
Not sure which zip format you're using, but one example of how it could work is with the zlib API: http://nodejs.org/api/zlib.html
If you use that library to create a stream, you can write that stream to the response, in concept. The bigger question in my mind is what you mean by "unzip to a URL"? If the zip is an archive with multiple files, what do you expect a user to see at that URL?
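To illustrate that streaming idea, a minimal sketch that pipes a gzip-decompressed file straight into the HTTP response. Note that zlib handles gzip/deflate streams, not multi-file .zip archives, so this fits single compressed files only ('page.html.gz' is a hypothetical file):

```javascript
const http = require('http');
const fs = require('fs');
const zlib = require('zlib');

http.createServer((req, res) => {
  res.setHeader('Content-Type', 'text/html');
  // Decompress on the fly and stream the result to the client
  fs.createReadStream('page.html.gz')
    .pipe(zlib.createGunzip())
    .pipe(res);
}).listen(3000);
```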