I'm trying to download a huge CSV file from the server; it is generated on the fly.
I'm returning ResponseEntity<StreamingResponseBody> asynchronously, so as soon as I have part of my data I'm returning it.
This is my controller code:
StreamingResponseBody streamingResponseBody = out -> {
    csvService.exportToCsvBySessionId(applicationId, sessionIdsInRange, out, tags);
};
return ResponseEntity.ok()
.headers(csvService.getHeaders(CSV_FILE_NAME))
.body(streamingResponseBody);
In the headers I'm adding:
produces: text/csv;
Content-Disposition: attachment; filename=%s.csv;
On the client side I'm using the Aurelia framework and sending the request using HttpClient (fetch):
public getFraudAlertsCsv() {
    this.serverProxy.fetch(`/sessions/fraud/csv`)
        .then(response => {
            this.logger.debug('waiting for response');
            return response.blob();
        })
        .then((blob: Blob) => this.downloadCsv(blob, `Fraud_Alerts_${new Date()}.csv`))
        .catch((err) => {
            this.logger.error("Failed to get appSessionId sessions csv file", err);
        });
}
Even though I can see in the network analysis that my request is starting to get a response (its size increases), there is no popup window asking to download the file, and the log doesn't print "waiting for response".
Instead, the whole file is downloaded only after the entire response has arrived (when the server closes the stream).
I want to show the progress of the download. How can I do it?
I think fetch doesn't support a progress API yet, so you may want to use a traditional XHR and listen for its progress event:
xhr.onprogress = function updateProgress(oEvent) {
    if (oEvent.lengthComputable) {
        var percentComplete = oEvent.loaded / oEvent.total * 100;
        // ...
    } else {
        // Unable to compute progress information since the total size is unknown
    }
};
Note: code example taken from MDN page https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Using_XMLHttpRequest#Monitoring_progress
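For completeness, here is a rough, untested sketch of how the same request could be issued with XHR so that progress events fire while the CSV streams in. The names getFraudAlertsCsv, downloadCsv, logger and the /sessions/fraud/csv URL are simply taken from your snippet; adjust them to your setup:
public getFraudAlertsCsv() {
    const xhr = new XMLHttpRequest();
    xhr.open('GET', '/sessions/fraud/csv', true);
    xhr.responseType = 'blob'; // the finished response will be available as a Blob

    xhr.onprogress = (event) => {
        if (event.lengthComputable) {
            const percent = (event.loaded / event.total) * 100;
            this.logger.debug(`download progress: ${percent.toFixed(1)}%`);
        } else {
            // With a streamed response (no Content-Length) the total is unknown,
            // so only the number of bytes received so far can be reported
            this.logger.debug(`downloaded ${event.loaded} bytes so far`);
        }
    };

    xhr.onload = () => {
        if (xhr.status === 200) {
            this.downloadCsv(xhr.response as Blob, `Fraud_Alerts_${new Date()}.csv`);
        }
    };

    xhr.onerror = (err) => this.logger.error("Failed to get appSessionId sessions csv file", err);
    xhr.send();
}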
Related
Here I'm implementing image upload to a database using Spring Boot and React. I have encountered an "Error parsing HTTP request header" error.
Error:
Error parsing HTTP request header
java.lang.IllegalArgumentException: Invalid character found in method
name
[0x160x030x010x020x000x010x000x010xfc0x030x030x1f0xc4T0x880xe1T0x070x00[Ua0xf40x8b0x0a0x900x8c<}0xe20xf70x080xa90xdaO0xb3U0xc7g0xaf0xfb30xa8].
HTTP method names must be tokens
React
doit = (e) => {
    e.preventDefault();
    let imagefile = document.querySelector('#input');
    let formData = new FormData();
    formData.append("image", imagefile.files[0]);
    return axios.post('https://localhost:8080/fileupload', "image", formData, {
        headers: {
            'Content-Type': 'multipart/form-data'
        }
    });
}
<form encType='multipart/form-data'>
    <input id="input" type='file' accept="image/*" />
    <button onClick={this.doit} type='submit'>Submit</button>
</form>
Spring Boot
@PostMapping(value = "/fileupload", consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
public String fileUpload(@RequestParam("name") String name, @RequestParam MultipartFile file) {
    try {
        byte[] image = file.getBytes();
        MyModel model = new MyModel(name, image);
        int saveImage = myService.saveImage(model);
        if (saveImage == 1) {
            return "success";
        } else {
            return "error";
        }
    } catch (Exception e) {
        return "error";
    }
}
So actually this problem has to do with the way you are sending the file over. Images and videos are not the same as JSON data; they are long strings of bytes (or encoded text) that produce something visual on screen. I don't understand all of it yet, but basically what you need to do is get the file location. For example, if you use a file-picker API and it lets you choose something to upload, you can store that info in a variable; that is what 'file' is below.
Then use JavaScript's built-in fetch method:
const genFunction = async (file) => {
    // 'file' is the location of the file you picked
    const response = await fetch(file);
    const blob = await response.blob();
    // then you need to send that blob to the server
    try {
        const result = await requestToServer({
            // configuration stuff
            contentType: 'image/jpg', // specify that you are sending an image, or whatever file type
            body: blob
        });
        console.log(result);
    } catch (e) {
        // handle the error
    }
};
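If it helps, here is a rough, untested sketch of that "send the blob to the server" step using plain fetch and FormData. The /fileupload path and the "name"/"file" part names just mirror the Spring controller in the question; treat them as placeholders for whatever your backend actually expects:
const uploadBlob = async (blob, name) => {
    const formData = new FormData();
    formData.append("name", name);       // matches @RequestParam("name")
    formData.append("file", blob, name); // matches @RequestParam MultipartFile file
    // Don't set Content-Type yourself; the browser adds the multipart boundary
    const response = await fetch("http://localhost:8080/fileupload", {
        method: "POST", // plain http here; only use https if the server is actually configured for TLS
        body: formData
    });
    return response.text(); // the controller returns "success" or "error"
};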
They did not teach me how to do this at coding bootcamp, and I didn't realize for a long time that there are more file types and more steps involved. I figured out how to make it work with Amplify Storage and React Native; if anyone needs that, just ask a question and I'll provide a solution for it.
This article talks a lot about the best methods to convert things into the format you actually need; the solution I provided is the slowest, but easy to understand:
http://eugeneware.com/software-development/converting-base64-datauri-strings-into-blobs-or-typed-array
Also you should check out:
https://docs.mongodb.com/manual/core/gridfs/
Store images in a MongoDB database //basically the same question
https://javabeat.net/mongodb-gridfs-tutorial/
I have made this mistake before and didn't know how to solve it; the help I got told me to start storing images on AWS S3, which is not what I was asking. I hope you figure it out.
I have a simple use case, as below:
User triggers a REST API call to process a file upload
The file gets uploaded to the cloud at the back end
The API returns a response with a URL for downloading the file, like http://somedomain/fileName.zip
After the file gets downloaded at the client side, there is another API call to update the DOWNLOAD count in the database for the downloaded file
This is being achieved using Angular code at the client side.
The pseudo code for this is as below:
//Download service call to download the file. It returns an Observable.
downloadService.downloadFile.subscribe(
    (data: downloadData) => this.processDataDownload(data),
    (err) => this.message = "Error Downloading File"
);

//method processing file download and updating download count
private processDataDownload(data: Response) {
    this.downloadFile(data);
    this.updateDownloadCount(data);
}

//download a file using URL
private downloadFile(data: Response): void {
    //standard JavaScript code to trigger a file download via a programmatic URL click
    var a = document.createElement("a");
    a.href = data.downloadUrl;
    var fileName = data.downloadUrl.split("/").pop();
    a.download = fileName;
    document.body.appendChild(a);
    a.click();
    a.remove();
}

//update download count
private updateDownloadCount(data: Response) {
    //another http call to update the download count for the file
}
Now, my questions are:
1) Is there any way we can make sure updateDownloadCount gets called ONLY after the file has been DOWNLOADED successfully?
2) The JavaScript code in downloadFile triggers a file download using a programmatic URL click.
3) Can we wait for this download to happen and then call updateDownloadCount?
4) Can we wait for the DOWNLOAD completion event and then trigger the database update?
5) Is there any way we can achieve this?
I'm assuming downloadService.downloadFile isn't actually downloading the file, so much as it's downloading the file details, correct? When you create a download url as you are doing in your sample, you are essentially telling the browser to handle the download process. If you want your application to handle it, you'll need to take a slightly different approach.
You basically want to use Angular's HttpClient to grab the file blob from the server. You can use it to track progress. When it's completed, you can send the confirmation to the API and render the internal download link.
Something like:
downloadFile(file) {
    return this.http.get(
        this.baseUrl + 'file.url',
        {
            responseType: 'blob',
            reportProgress: true,
            observe: 'events',
        }
    );
}
In your component you can subscribe to this download and check
result.type against HttpEventType.DownloadProgress for progress or HttpEventType.Response for download completion.
this.service.downloadFile(myfile).subscribe(result => {
    if (result.type === HttpEventType.DownloadProgress) {
        // ...calc percent
    }
    if (result.type === HttpEventType.Response) {
        // ...this is where you create the blob download link and call your api
    }
});
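Fleshed out a little, the subscription could look something like the sketch below. This is only a rough illustration; myfile, this.progress and updateDownloadCount are placeholder names carried over from the snippets above, and the exact event typings may vary by Angular version:
this.service.downloadFile(myfile).subscribe(event => {
    if (event.type === HttpEventType.DownloadProgress) {
        // total is only known when the server sends a Content-Length header
        if (event.total) {
            this.progress = Math.round((event.loaded / event.total) * 100);
        }
    }
    if (event.type === HttpEventType.Response && event.body) {
        // The full blob has arrived: build a local object URL and trigger the download...
        const url = window.URL.createObjectURL(event.body);
        const a = document.createElement("a");
        a.href = url;
        a.download = myfile.name;
        document.body.appendChild(a);
        a.click();
        a.remove();
        window.URL.revokeObjectURL(url);
        // ...and only now report the completed download back to the API
        this.updateDownloadCount(myfile);
    }
});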
I am using a node server to send a table from a SQLite DB to the browser. This table contains the filename and path of a PDF file that I want to render in the browser. Until now I was using hard-coded paths for the PDF file and rendering it. But now I have set up a GET route and a controller in node such that whenever '/content' is hit in the browser, the server queries the database and sends the data to the client. To send the data I am using
res.render('content/index',{data:queryData});
Now, how do I access this data using client-side JavaScript so that I can pass the path of the PDF file to the function that renders the PDF? I have done research and the nearest answer I found was using XMLHttpRequest. I tried this method:
var xhr = new XMLHttpRequest();
const path = "http://localhost:3000/content";
xhr.onreadystatechange = function () {
    if (xhr.readyState == 4 && xhr.status == 200) {
        var myResponseText = xhr.responseText;
        console.log(myResponseText);
    }
};
xhr.open('get', path, true);
xhr.send();
When I do this I get the entire HTML code for the view, not the data I expected. How do I solve this issue? I have done some more reading while writing this post, and I suppose I should have set a header somewhere? But the documentation says
app.render(view, [locals], callback)
which means res.render can take local variables; shouldn't that be enough without setting headers?
You should return JSON instead of rendering the template:
app.get('/content/index', (req, res) => {
    res.json({ data: queryData });
});
I am using pdf.js
PDF.js needs the PDF file, e.g.:
pdfjsLib.getDocument('helloworld.pdf')
I'm assuming your queryData goes something like this:
{ filename: 'file.pdf', path: './path/to/file.pdf' }
I'm not sure what's in your content/index or what path this is on, but you obviously need to find a way to make your PDF file ('./path/to/file.pdf') available (as a download). See Express's built-in static server or res.download() to do that.
Once you have the PDF file available as a download, plug that path into PDF.js's .getDocument('/content/file.pdf') and do the rest to render the PDF onto the canvas or whatever.
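Putting the two pieces together, a rough sketch might look like the following. This is only illustrative: it assumes the JSON route from the first answer, that queryData has a filename property, and that the PDF is served statically under /content/ (e.g. via express.static()); newer PDF.js versions expose the result of getDocument() via .promise:
// Client side: ask the server for the metadata, then hand the served PDF URL to PDF.js
fetch('/content/index')
    .then(res => res.json())
    .then(({ data }) => {
        // e.g. data.filename === 'file.pdf', served from a static directory
        return pdfjsLib.getDocument('/content/' + data.filename).promise;
    })
    .then(pdf => {
        console.log('PDF loaded with ' + pdf.numPages + ' pages');
        // ...render pages onto a canvas as usual
    });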
Hope that helps.
I have a Web API that reads a file from Azure and downloads it into a byte array. The client receives this byte array and downloads it as a PDF. This does not work well with large files.
I am not able to figure out how I can send the bytes in chunks from the Web API to the client.
Below is the Web API code, which just returns the byte array to the client:
CloudBlockBlob blockBlob = container.GetBlockBlobReference(fileName);
blockBlob.FetchAttributes();
byte[] data = new byte[blockBlob.Properties.Length];
blockBlob.DownloadToByteArray(data, 0);
return data;
Client-side code gets the data when the AJAX request completes, creates a hyperlink and sets its download attribute, which downloads the file:
var a = document.createElement("a");
a.href = 'data:application/pdf;base64,' + data.$value;
a.setAttribute("download", filename);
The error occurred for a file of 1.86 MB.
The browser displays the message:
Something went wrong while displaying the web page.To continue, reload the webpage.
The issue is most likely your server running out of memory on these large files. Don't load the entire file into a variable only to then send it out as the response. That causes a double download: your server has to download the file from Azure Storage and keep it in memory, and then your client has to download it from the server. You can do a stream-to-stream copy instead so memory is not chewed up. Here is an example for your Web API controller.
public async Task<HttpResponseMessage> GetPdf()
{
    //normally use a using statement for streams, but if you use one here, the stream will be closed before your client downloads it
    Stream stream;

    try
    {
        //container setup earlier in code
        var blockBlob = container.GetBlockBlobReference(fileName);
        stream = await blockBlob.OpenReadAsync();

        //Set your response as the stream content from Azure Storage
        var response = Request.CreateResponse(HttpStatusCode.OK);
        response.Content = new StreamContent(stream);
        response.Content.Headers.ContentLength = stream.Length;
        //This could change based on your file type
        response.Content.Headers.ContentType = new MediaTypeHeaderValue("application/pdf");
        return response;
    }
    catch (HttpException ex)
    {
        //A network error between your server and Azure storage
        return this.Request.CreateErrorResponse((HttpStatusCode)ex.GetHttpCode(), ex.Message);
    }
    catch (StorageException ex)
    {
        //An Azure storage exception
        return this.Request.CreateErrorResponse((HttpStatusCode)ex.RequestInformation.HttpStatusCode, "Error getting the requested file.");
    }
    catch (Exception ex)
    {
        //catch-all exception... log this, but don't bleed the exception to the client
        return this.Request.CreateErrorResponse(HttpStatusCode.BadRequest, "Bad Request");
    }
    finally
    {
        stream = null;
    }
}
I have used (almost exactly) this code and have been able to download files well over 1GB in size.
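On the client side, a minimal sketch of consuming the streamed response as a Blob instead of a giant base64 data: URI might look like this. The /api/pdf route is a placeholder for wherever GetPdf is mapped, and filename is the same variable used in the question:
fetch('/api/pdf')                      // placeholder route for the controller above
    .then(response => response.blob()) // the streamed bytes end up in a Blob
    .then(blob => {
        const url = window.URL.createObjectURL(blob);
        const a = document.createElement("a");
        a.href = url;
        a.setAttribute("download", filename);
        document.body.appendChild(a);
        a.click();
        a.remove();
        window.URL.revokeObjectURL(url);
    });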
I'm making a JavaScript script that is going to essentially save an old game development sandbox website before the owners scrap it (and lose all of the games). I've created a script that downloads each game via AJAX, and would like to somehow upload it straight away, also using AJAX. How do I upload the downloaded file (that's stored in responseText, presumably) to a PHP page on another domain (that has cross origin headers enabled)?
I assume there must be a way of uploading the data from the first AJAX request, without transferring the responseText to another AJAX request (used to upload the file)? I've tried transferring the data, but as expected, it causes huge lag (and can crash the browser), as the files can be quite large.
Is there a way that an AJAX request can somehow upload individual packets as soon as they're received?
Thanks,
Dan.
You could use Firefox's moz-chunked-text and moz-chunked-arraybuffer response types. On the JavaScript side you can do something like this:
function downloadUpload() {
    var downloadUrl = "server.com/largeFile.ext";
    var uploadUrl = "receiver.net/upload.php";
    var dataOffset = 0;

    xhrDownload = new XMLHttpRequest();
    xhrDownload.open("GET", downloadUrl, true);
    xhrDownload.responseType = "moz-chunked-text"; // <- only works in Firefox
    xhrDownload.onprogress = uploadData;
    xhrDownload.send();

    function uploadData() {
        var data = {
            file: downloadUrl.substring(downloadUrl.lastIndexOf('/') + 1),
            offset: dataOffset,
            chunk: xhrDownload.responseText
        };

        xhrUpload = new XMLHttpRequest();
        xhrUpload.open("POST", uploadUrl, true);
        xhrUpload.setRequestHeader('Content-Type', 'application/json; charset=UTF-8');
        xhrUpload.send(JSON.stringify(data));

        dataOffset += xhrDownload.responseText.length;
    }
}
On the PHP side you need something like this:
$in = fopen("php://input", "r");
$postContent = stream_get_contents($in);
fclose($in);
$o = json_decode($postContent);
file_put_contents($o->file . '-' . $o->offset . '.txt', $o->chunk);
These snippets will just give you the basic idea, you'll need to optimize the code yourself.
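If you don't want to rely on the Firefox-only response type, a similar chunk-by-chunk relay can be sketched with the Fetch API's ReadableStream reader. This is only an untested outline that reuses the same upload.php contract as above, and it assumes the downloaded content is text (binary data would need base64 encoding instead of TextDecoder):
async function downloadUpload() {
    const downloadUrl = "server.com/largeFile.ext";
    const uploadUrl = "receiver.net/upload.php";
    const fileName = downloadUrl.substring(downloadUrl.lastIndexOf('/') + 1);
    const decoder = new TextDecoder();
    let dataOffset = 0;

    const response = await fetch(downloadUrl);
    const reader = response.body.getReader();

    while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        // value is a Uint8Array chunk; relay it to the receiver immediately
        await fetch(uploadUrl, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json; charset=UTF-8' },
            body: JSON.stringify({
                file: fileName,
                offset: dataOffset,
                chunk: decoder.decode(value, { stream: true })
            })
        });
        dataOffset += value.length;
    }
}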