I'm currently working on a project that uses GCP Storage to store some recorded audio files. The way it is designed, a user requests a file from the Node.js server, which in turn pipes the content of the file from GCP to the client. However, I'm seeing a big spike in latency when multiple users (<10) download files at the same time. My thought is that the Node.js server should just return the URI of the GCP file to the client, and the client should download it directly from GCP using an authentication token. Am I correct, or should Node.js indeed act as a proxy between GCP and the client and pipe the data?
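For reference, this is roughly what I have in mind on the Node.js side (just a sketch using @google-cloud/storage; the bucket name and route handler are placeholders):

```javascript
// Sketch: return a short-lived signed read URL instead of piping the file.
// Assumes @google-cloud/storage is installed and the server already has credentials.
const { Storage } = require('@google-cloud/storage');
const storage = new Storage();

async function getDownloadUrl(req, res) {
  const [url] = await storage
    .bucket('my-audio-bucket')               // placeholder bucket name
    .file(req.params.fileName)               // placeholder object name
    .getSignedUrl({
      version: 'v4',
      action: 'read',
      expires: Date.now() + 15 * 60 * 1000,  // URL valid for 15 minutes
    });
  res.json({ url });                         // client then downloads directly from GCS
}
```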
I have written a game in JavaScript with the p5.js library. Now I want to host the game on a server to conduct a survey on a service like Amazon Mechanical Turk. Ideally the clients receive a URL to the game and play it while in-game actions are tracked, stored in Node.js or on the server, and exported as a .csv file once they are done playing. After they finish the game, the .csv file should be sent automatically to a location that I can then access. I have zero experience in server hosting or similar topics.
So a couple of questions arise:
Is a hosting service like Heroku suitable for hosting the game?
Do I need to use Node.js to make this happen?
Which of those two would extract the data and store it as a .csv? And where would the file be stored?
How do I get or access the .csv afterwards?
Any alternative approaches to solve the problem?
Thanks a lot in advance!
github repository: https://github.com/luuuucaaa/schaeffers-charade
game on github pages: https://luuuucaaa.github.io/schaeffers-charade/
If I were you, I would do it like below:
Host
Since your project is basically static HTML & JavaScript content,
AWS S3's static website hosting would be sufficient (the current GitHub Pages site is also an option if you just want to host it).
Hosting it in a Node.js environment is also possible using webpack serving, but that requires additional work (though if you need other npm packages to generate the .csv file, you need webpack anyway to bundle the JS file and attach it to the HTML).
Data Storing
Two ways are worth considering.
The first is to store it on the filesystem: generate the .csv via a JS script within your app and save it where the app is hosted (if you go with S3, you can access it afterwards, but I'm not sure whether a client-side script can write objects there).
The second is to post the data to another API endpoint (for example, an API Gateway on AWS that triggers a Lambda, which stores it on S3), as sketched below.
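For example, something along these lines could turn the tracked actions into a .csv and post it to such an endpoint (just a sketch; the endpoint URL and the shape of the tracked actions are placeholders):

```javascript
// Sketch: build a .csv from tracked in-game actions and send it to an API endpoint.
// The URL below is a placeholder for e.g. an AWS API Gateway route backed by a Lambda.
const actions = [
  { time: 12.3, action: 'guess', correct: true },
  { time: 45.8, action: 'skip',  correct: false },
];

function toCsv(rows) {
  const header = Object.keys(rows[0]).join(',');
  const lines = rows.map(row => Object.values(row).join(','));
  return [header, ...lines].join('\n');
}

fetch('https://example.execute-api.us-east-1.amazonaws.com/prod/results', {
  method: 'POST',
  headers: { 'Content-Type': 'text/csv' },
  body: toCsv(actions),
});
```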
It's merely an example and I don't know exactly what you want to achieve, but take it into consideration. Good luck. Cool game BTW.
I am having trouble uploading a big movie to the Azure App Service which I created. I get a request timeout after 4-5 minutes while uploading the movie (greater than 150 MB). For the frontend, I am using Vue.js and send multiple files with Promise.allSettled. I don't have any issues when running it locally. For the backend, I am using Node.js (Fastify) with the multer package and the in-memory storage option. Once I receive a file, I upload it to Azure Blob Storage.
Do I have to send the movie data in chunks from the frontend to the backend? How do I achieve that when I have multiple files?
Can we use Socket.IO?
I tried using Socket.IO; however, my browser freezes if I send a big file, and I am totally new to sockets.
I am not sure how I can fix this issue. It would be great if someone could guide me and show me an example.
Looking forward to hearing from you guys
thanks,
meet
Problems when uploading files to a server:
Check the timeout in your axios request (frontend), because you have to wait until all the files are uploaded to the server (https://github.com/axios/axios#creating-an-instance); see the sketch below this list.
Check the domain hosting configuration; if you are hosting your backend service behind nginx, check its upload size limit (https://www.tecmint.com/limit-file-upload-size-in-nginx/).
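For the first point, a minimal sketch of an axios instance with a raised timeout (the base URL and the 30-minute value are just examples):

```javascript
// Sketch: axios instance configured for large, slow uploads.
import axios from 'axios';

const uploadClient = axios.create({
  baseURL: 'https://my-app.azurewebsites.net', // placeholder backend URL
  timeout: 30 * 60 * 1000,    // 30 minutes; axios defaults to 0 (no timeout), but proxies may still cut off sooner
  maxContentLength: Infinity, // don't let axios reject large bodies
  maxBodyLength: Infinity,
});

// usage: await uploadClient.post('/upload', formData);
```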
I have a local IP so that other computers can connect to my local server.
My problem is that when I download a file, it ends up on the local server. How can I transfer a file that is saved on my local server to the other computer I am currently on?
Any module that is available will do.
Node.js and JavaScript only.
Thanks in advance.
You need to have a client (the computer which is initiating the transfer) and a server (the computer you are transferring the file to).
(You can also do it the other way around, where the client initiates the transfer by asking the server for the file, and then gets the file in the response, but that isn't what you are asking).
A typical way to do this using Node.js would be to have the server understand HTTP and for the client to make an HTTP request.
This question explains how to make an HTTP POST request with an attached file, which covers the needs of the client.
This question explains how to use Express.js to receive the file. You can then save it as normal in Node.js.
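As a rough sketch of the receiving side (Express with the multer middleware; the route, port, and field name are placeholders):

```javascript
// Sketch: an Express server that receives an uploaded file and saves it to disk.
// Assumes `express` and `multer` are installed; the form field name 'file' is a placeholder.
const express = require('express');
const multer = require('multer');

const upload = multer({ dest: 'uploads/' }); // files land in ./uploads with generated names
const app = express();

app.post('/upload', upload.single('file'), (req, res) => {
  // req.file.path is where multer stored the file; req.file.originalname is its original name
  res.json({ savedAs: req.file.path, originalName: req.file.originalname });
});

app.listen(3000, () => console.log('listening on http://localhost:3000'));
```

The client then just makes a multipart/form-data POST to that route with the file attached.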
I have implemented JavaScript code to upload files in multiple chunks to Google Cloud Storage.
Below is the flow I use to upload a file:
1. The user selects a file to upload using the JavaScript client web app {request is from the ASIA region}
2. The JavaScript client app asks our app server implemented in Node.js {hosted on Google Cloud Compute Engine - US region} to allow the file upload {authorization}
3. The Node.js app server returns a signed URL to the client app
4. The client app starts uploading the file to Google Storage in multiple chunks using that signed URL
5. On successful upload, the client reports back to the app server
I am able to upload files in multiple chunks, but I have observed that the upload is 2-3 times slower if I host the Node.js app server in the Google Cloud US region rather than on the same machine from which I execute the client app request.
Please let me know if you have a solution for how to improve the upload performance.
There is a workaround mentioned in the Google Cloud signed URL documentation:
Resumable uploads are pinned in the region they start in. For example,
if you create a resumable upload URL in the US and give it to a client
in Asia, the upload still goes through the US. Performing a resumable
upload in a region where it wasn't initiated can cause slow uploads.
To avoid this, you can have the initial POST request constructed and
signed by the server, but then give the signed URL to the client so
that the upload is initiated from their location. Once initiated, the
client can use the resulting session URI normally to make PUT requests
that do not need to be signed.
But with that reference, I couldn't find any code sample for:
1. Once the client receives the signed URL from the server, how should the initial JSON API call be constructed?
2. What is the expected response to that first call, and how do I extract the session URI?
3. How do I use the session URI to upload further chunks?
You may be confusing two separate GCS features. GCS allows for resumable uploads to be authorized to third parties without credentials in a couple of ways.
First, and preferred, is signed URLs. Your server sends a signed URL to a client, which allows that client to begin a resumable upload.
Second, and less preferred due to the region pinning you mention above, is having the server initiate a resumable upload itself and then pass the upload ID to the client.
It sounds like you want the first thing but are using the second.
Using signed URLs requires making use of the XML API, which handles resumable uploads in a similar way to the JSON API: https://cloud.google.com/storage/docs/xml-api/resumable-upload
You'll want to sign that very first POST call that creates the upload and pass that URL to the user to invoke on their own.
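Roughly, the flow could look like the sketch below, assuming the current @google-cloud/storage Node.js client; the bucket name, expiry, and chunking details are placeholders and error handling is omitted:

```javascript
// --- Server (Node.js): sign the initial resumable POST for the XML API ---
const { Storage } = require('@google-cloud/storage');
const storage = new Storage();

async function getResumableSignedUrl(objectName) {
  const [url] = await storage
    .bucket('my-bucket')                   // placeholder bucket name
    .file(objectName)
    .getSignedUrl({
      version: 'v4',
      action: 'resumable',                 // signature covers the required x-goog-resumable: start header
      expires: Date.now() + 15 * 60 * 1000,
    });
  return url;
}

// --- Client (browser): initiate the upload from the user's own location ---
// The bucket needs a CORS rule exposing the Location header for this to work.
async function startUpload(signedUrl) {
  const res = await fetch(signedUrl, {
    method: 'POST',
    headers: { 'x-goog-resumable': 'start' }, // empty body, so Content-Length: 0 is implied
  });
  return res.headers.get('Location');         // the session URI used for all further PUTs, no signing needed
}

// --- Client: upload one chunk to the session URI ---
// Every chunk except the last should be a multiple of 256 KiB; intermediate PUTs return HTTP 308.
async function uploadChunk(sessionUri, chunk, offset, totalSize) {
  const end = offset + chunk.size - 1;
  return fetch(sessionUri, {
    method: 'PUT',
    headers: { 'Content-Range': `bytes ${offset}-${end}/${totalSize}` },
    body: chunk,
  });
}
```

Because the client makes the initiation POST itself, the resulting session URI is pinned to the client's region, which is exactly the workaround described in the documentation quoted above.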
I have a web page that is supposed to work offline most of the time (without an internet connection).
Once in a while it needs to connect to the web and grab some data to be used offline.
I'm searching for a way to store the data locally while it is connected and still have access to the data offline.
I checked localStorage and the FileSystem API, but both follow the Same-Origin Policy.
Any suggestion will be appreciated.
When I was creating an offline application to sync with an online version, I kept the required information in a JSON file instead of localStorage.
Workflow:
The user requests new files to be generated (a.k.a. sync with the server) using some online interface.
Generate a JSON file with the needed data and save it alongside the offline files.
The user downloads the new files and replaces the old ones with them.
The offline JS reads the JSON file and gets all the information.
We were using a Java installer (launch4j to generate .jar files and IzPack to make the installer).
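For the step where the offline JS reads the JSON file, keep in mind that fetch/XHR for a local .json is also blocked by the same-origin policy if the page is opened straight from file://. One simple workaround (a sketch; the names are placeholders, and it swaps raw .json for a tiny .js wrapper) is to generate the data as a script and include it with a plain script tag:

```javascript
// data.js - generated during the online sync step; it just assigns the data to a
// global variable, so the offline page can read it without making any request.
window.OFFLINE_DATA = {
  lastSync: '2024-01-01T00:00:00Z', // example field
  records: [],                      // whatever the app needs while offline
};

// Any offline script loaded after <script src="data.js"></script> can read it directly:
console.log(window.OFFLINE_DATA.lastSync);
```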