How to forward AngularJS $http response to the browser? - javascript

I have a backend with header-based authentication and I want to download a file from it. With cookie-based authentication it is simple: you render a link to the file as an <a href...> and let the browser handle the rest.
I have this code:
$http.get(url).then(function (response) {
    // do something
}, function (response) {
    alert("ERROR");
});
The HTTP headers are injected, so in this case $http.get will populate them for us. What I want to do is feed this response to $window so that, from the user's perspective, it looks like a usual file download.
Is there any way of doing this? Any other options are welcome.

I assume what you want is to push the file to the client without them needing to click anything else on the page. You can cause this to occur without needing to mess with $window at all.
Here is an example of how I pushed a zip file created in the browser that contained multiple selected regions of a canvas:
var downloader = angular.element('<a>download</a>');
downloader.attr('href', "data:application/zip;base64," + content);
downloader.attr('download', 'my.zip');
var ev = $document[0].createEvent("MouseEvent");
ev.initMouseEvent(
    "click",
    true /* bubble */, true /* cancelable */,
    window, null,
    0, 0, 0, 0, /* coordinates */
    false, false, false, false, /* modifier keys */
    0 /* left */, null
);
downloader[0].dispatchEvent(ev);
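Applied to the original $http scenario, the same anchor-click idea can be combined with a Blob instead of a base64 data URI. The following is only a sketch, assuming the backend returns the file as binary data; the MIME type and file name are placeholders:
$http.get(url, { responseType: 'arraybuffer' }).then(function (response) {
    // Wrap the raw bytes in a Blob; the MIME type is an assumption.
    var blob = new Blob([response.data], { type: 'application/octet-stream' });
    var blobUrl = URL.createObjectURL(blob);

    // Build a temporary anchor and click it, as in the example above.
    var downloader = angular.element('<a></a>');
    downloader.attr('href', blobUrl);
    downloader.attr('download', 'report.bin'); // placeholder file name
    document.body.appendChild(downloader[0]);
    downloader[0].click();
    document.body.removeChild(downloader[0]);
    URL.revokeObjectURL(blobUrl);
}, function (response) {
    alert("ERROR");
});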

Related

Cordova FileTransfer sends file but nothing received

I'm sending a video recorded via Cordova's MediaCapture plugin to a remote server via the FileTransfer plugin, but nothing - neither the file, nor any data whatsoever - is arriving at the server end. The server receives the request, but it seems to be empty.
According to Cordova, everything goes fine. Here's the readout from the success callback:
And here's my JS: (mediaFiles[0] is the captured video file)
var options = new FileUploadOptions();
options.fileName = 'foo.bar';
options.mimeType = mediaFiles[0].type;
options.params = {
    mime: mediaFiles[0].type
};

var ft = new FileTransfer();
ft.upload(
    mediaFiles[0].fullPath,
    encodeURI("http://xxxxxx.com/receive-video.php"),
    function (r) {
        console.log(r);
        alert('sent file!');
    },
    function (error) {
        alert('error');
        console.log(error);
    },
    options,
    true
);
(Note the last param, trustAllHosts, is set to true since my test server uses a self-signed certificate.)
Cordova clearly thinks it's sent data, but my PHP script disagrees. Here's my PHP:
file_put_contents(
    'readout.txt',
    "Payload\n----------\n".
    file_get_contents('php://input').
    "\n\nRequest\n----------\n".
    print_r($_REQUEST, 1).
    "\n\nFiles\n----------\n".
    print_r($_FILES, 1).
    "\n\nPost\n----------\n".
    print_r($_POST, 1)
);
As you can see, I'm looking pretty much everywhere. However, all of these come out empty in readout.txt.
What am I doing wrong?
It turned out the chunkedMode param (in options) was the culprit.
In case this helps anyone else, disable this (it's true by default) and all should be fine.
options.chunkedMode = false;
Not sure how to explain the behaviour of the empty request with it turned on, though. The param exists to send the file in chunks, to allow for progress feedback of some sort.
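For reference, here is how the upload from the question might look with chunked mode disabled (a sketch reusing the placeholders from the question):
var options = new FileUploadOptions();
options.fileName = 'foo.bar';
options.mimeType = mediaFiles[0].type;
options.params = { mime: mediaFiles[0].type };
options.chunkedMode = false; // send the file in one piece instead of chunks

var ft = new FileTransfer();
ft.upload(
    mediaFiles[0].fullPath,
    encodeURI("http://xxxxxx.com/receive-video.php"),
    function (r) { console.log(r); alert('sent file!'); },
    function (error) { console.log(error); alert('error'); },
    options,
    true // trustAllHosts, only for self-signed test servers
);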

Protractor e2e test case for downloading pdf file

Can anyone tell me how to write a test case for a link to download a PDF file using the Jasmine framework?
Thanks in advance.
I can currently set the download path location:
Chrome
capabilities: {
    'browserName': 'chrome',
    'platform': 'ANY',
    'version': 'ANY',
    'chromeOptions': {
        // Get rid of --ignore-certificate yellow warning
        args: ['--no-sandbox', '--test-type=browser'],
        // Set download path and avoid prompting for download even though
        // this is already the default on Chrome, but for completeness
        prefs: {
            'download': {
                'prompt_for_download': false,
                'default_directory': '/e2e/downloads/'
            }
        }
    }
}
For remote testing you would need a more complex infrastructure, like setting up a Samba share or a network-shared directory as the download destination.
Firefox
var FirefoxProfile = require('firefox-profile');
var q = require('q');
[...]
getMultiCapabilities: getFirefoxProfile,
framework: 'jasmine2',
[...]
function getFirefoxProfile() {
    "use strict";
    var deferred = q.defer();
    var firefoxProfile = new FirefoxProfile();
    firefoxProfile.setPreference("browser.download.folderList", 2);
    firefoxProfile.setPreference("browser.download.manager.showWhenStarting", false);
    firefoxProfile.setPreference("browser.download.dir", '/tmp');
    firefoxProfile.setPreference("browser.helperApps.neverAsk.saveToDisk", "application/vnd.openxmlformats-officedocument.wordprocessingml.document");
    firefoxProfile.encoded(function(encodedProfile) {
        var multiCapabilities = [{
            browserName: 'firefox',
            firefox_profile: encodedProfile
        }];
        deferred.resolve(multiCapabilities);
    });
    return deferred.promise;
}
Finally and maybe obvious, to trigger the download you click on the download link as you know, e.g.
$('a.some-download-link').click();
I needed to check the contents of the downloaded file (a CSV export in my case) against an expected result, and found the following to work:
var filename = '/tmp/export.csv';
var fs = require('fs');

if (fs.existsSync(filename)) {
    // Make sure the browser doesn't have to rename the download.
    fs.unlinkSync(filename);
}

$('a.download').click();

browser.driver.wait(function() {
    // Wait until the file has been downloaded.
    // We need to wait like this because otherwise protractor has a nasty habit
    // of trying to run any following tests while the file is still being
    // downloaded and hasn't been moved to its final location.
    return fs.existsSync(filename);
}, 30000).then(function() {
    // Do whatever checks you need here. This is a simple comparison;
    // for a larger file you might want to calculate the file's MD5 hash
    // and see if it matches what you expect.
    expect(fs.readFileSync(filename, { encoding: 'utf8' })).toEqual(
        "A,B,C\r\n"
    );
});
I found Leo's configuration suggestion helpful for allowing the download to be saved somewhere accessible.
The 30000ms timeout is the default, so could be omitted, but I'm leaving it in as a reminder in case someone would like to change it.
It could also be a test that checks the href attribute, like so:
var link = element(by.css("a.pdf"));
expect(link.getAttribute('href')).toEqual('someExactUrl');
The solutions above would not work for remote browser testing, e.g. via BrowserStack. An alternative solution, just for Chrome, could look like this:
if ((await browser.getCapabilities()).get('browserName') === 'chrome') {
    await browser.driver.get('chrome://downloads/');
    const items =
        await browser.executeScript('return downloads.Manager.get().items_') as any[];
    expect(items.length).toBe(1);
    expect(items[0].file_name).toBe('some.pdf');
}
One thing I've done in the past is to use an HTTP HEAD command. Basically, it's the same as a 'GET', but it only retrieves the headers.
Unfortunately, the web server needs to support 'HEAD' explicitly. If it does, you can actually try the URL and then check for 'application/pdf' in the Content-Type, without having to actually download the file.
If the server isn't set up to support HEAD, you can probably just check the link text like was suggested above.
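A minimal sketch of that idea in a Protractor/Jasmine test, using Node's https module (this assumes the link points at an https URL and that the server answers HEAD requests; the a.pdf selector is taken from the answer above):
var https = require('https');
var url = require('url');

it('serves a PDF at the link target', function (done) {
    element(by.css('a.pdf')).getAttribute('href').then(function (href) {
        var options = url.parse(href);
        options.method = 'HEAD'; // only fetch the headers, not the body

        var req = https.request(options, function (res) {
            expect(res.headers['content-type']).toContain('application/pdf');
            done();
        });
        req.on('error', done.fail);
        req.end();
    });
});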

Dropzone.js retry if ajax failed

I'm using dropzone.js to upload certain files to my server. I have the problem that sometimes the server isn't able to keep up with the connections and refuses some uploads, so they fail and get marked red with an x. I would like to automatically retry after a certain amount of time, or at least give the user the ability to restart the upload manually.
Is there an implemented feature in dropzone.js, an easy enough way to implement it myself, or a better tool for doing these kinds of uploads with drag/drop, preview, ajax, etc.?
My solution doesn't require changing the Dropzone library and is only 4 lines long. I try the file upload two times because the first failed request sets a new cookie-based CSRF token:
var isFirstTry = true;

myDropzone = new Dropzone(document.body, {
    init: function() {
        this.on("sending", function(file, xhr, formData) {
            xhr.setRequestHeader("X-CSRF", Cookies.get('CSRF'));
        });
    },
    error: function(file, errorMessage, xhr) {
        if (errorMessage && errorMessage.status === 405 && file && isFirstTry) {
            isFirstTry = false;
            // remove the item from the preview
            this.removeFile(file);
            // duplicate the File object
            new File([file], file.name, { type: file.type });
            this.uploadFile(file);
        }
    },
    // other configs
});
After the error event, Dropzone fires the complete event no matter what the result was. On complete, Dropzone marks the status element as complete, which hides the progress bar. To prevent this behaviour, copy the File object; this stops the complete hook from handling the new preview element.
One small modification to dropzone.js is required to make things look pretty, but otherwise it's just a directive.
My dropzone now retries (infinitely, but I'll fix that later) until it succeeds. A little more work is required to reset the progress bars but this should be enough to get you somewhere (if you still care about this).
The edit to dropzone.js is (in the beautified version):
success: function(file) {
    file.previewElement.classList.remove("dz-error");
    return file.previewElement.classList.add("dz-success");
}
Where I've added the remove line. This changes Xs to ticks when a file successfully uploads.
The angular directive follows:
.directive('dropZone', function($rootScope) {
    return function ($scope, element, attr) {
        var myDropZone = element.dropzone({
            url: "api/ImageUpload",
            maxFilesize: 100,
            paramName: "uploadfile",
            maxThumbnailFilesize: 5,
            autoProcessQueue: false,
            parallelUploads: 99999,
            uploadMultiple: false,
            // this is my identifier so my backend can index the images together
            params: { identifier: $scope.identifier },
            // I seem to need to do this when a file is added, otherwise it doesn't update
            init: function() {
                this.on("addedfile", function(file) { $rootScope.$digest(); });
            }
        });

        // grabbing the dropzone object and putting it somewhere the controller can reach it
        $scope.dropZone = myDropZone.context.dropzone;

        // what we use to work out if we're _really_ complete
        $scope.errors = [];

        // here is our retry mechanism
        myDropZone.context.dropzone.addEventListener("error", function(file, errorMessage, xhr) {
            // log our failure so we don't accidentally complete
            $scope.errors.push(file.name);
            // retry!
            myDropZone.context.dropzone.uploadFile(file);
        });

        myDropZone.context.dropzone.addEventListener("success", function(file, errorMessage, xhr) {
            // remove from the error list once "success" (_not_ "complete")
            $scope.errors.splice($scope.errors.indexOf(file.name), 1);
        });

        // this gets called multiple times because of our "retry"
        myDropZone.context.dropzone.addEventListener("queuecomplete", function() {
            // if we're here AND have no errors we're done
            if ($scope.errors.length === 0) {
                // this is my callback to the controller to state we're all done
                $scope.uploadComplete();
            }
        });
    };
})
Not sure if all that myDropZone.context.dropzone stuff is necessary; I kinda suck at JavaScript and spend a lot of my time console.logging() objects and examining them in the debugger. This is where I found the dropzone component, perhaps there is an easier way?
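For what it's worth, Dropzone keeps track of the instances it creates, so something like the following might be a simpler way to grab the same object inside the directive (a sketch; Dropzone.forElement accepts a DOM element or selector, as far as I know):
// Look up the Dropzone instance attached to the directive's element,
// instead of digging through the jQuery wrapper's context.
var dropzoneInstance = Dropzone.forElement(element[0]);
$scope.dropZone = dropzoneInstance;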
My situation was a little bit different than Quibblesome's, but my solution was based on their answer... so thanks Quibblesome!
In my situation the ajax wasn't failing, so dropzone.addEventListener("error", function(file, errorMessage, xhr)) was never getting triggered. So I changed Quibblesome's solution a bit to work even if Dropzone triggered success.
var fileObj;

clientDetailsDropZone = new Dropzone($("#clientDetailsDropZoneArea").get(0), {
    init: function() {
        this.on("success", function(e, response) {
            // Keep a reference to the Dropzone instance; `this` is rebound
            // inside the getFileAuthToken callback below.
            var dropzone = this;
            // IF THE USER'S authToken is expired... retry the upload automatically
            if (!authToken.isValid) {
                // Get a new token and then automatically re-upload --
                // getFileAuthToken is another ajax call that authorizes the user
                getFileAuthToken(function(e) {
                    dropzone.uploadFile(fileObj);
                });
            } else {
                // They had a good token and the upload worked
                alert("yay your upload worked!");
            }
        });
        this.on("addedfile", function(file) {
            fileObj = file;
        });
    }
});

Is there a way to not send cookies when making an XMLHttpRequest on the same origin?

I'm working on an extension that parses the gmail rss feed for users. I allow the users to specify username/passwords if they don't want to stay signed-in. But this breaks for multiple sign-in if the user is signed-in and the username/password provided is for a different account. So I want to avoid sending any cookies but still be able to send the username/password in the send() call.
As of Chrome 42, the fetch API allows Chrome extensions (and web applications in general) to perform cookie-less requests. HTML5 Rocks offers an introductory tutorial on using the fetch API.
Advanced documentation on fetch is quite sparse at the moment, but the API interface from the specification is a great starting point. The fetch algorithm described below the interface shows that requests generated by fetch have no credentials by default!
fetch('http://example.com/').then(function(response) {
    return response.text(); // <-- Promise<String>
}).then(function(responseText) {
    alert('Response body without cookies:\n' + responseText);
}).catch(function(error) {
    alert('Unexpected error: ' + error);
});
If you want truly anonymous requests, you could also disable the cache:
fetch('http://example.com/', {
    // credentials: 'omit', // this is the default value
    cache: 'no-store',
}).then(function(response) {
    // TODO: Handle the response.
    // https://fetch.spec.whatwg.org/#response-class
    // https://fetch.spec.whatwg.org/#body
});
You can do that by using the chrome.cookies module. The idea is to get the current cookies, save them, remove them from the browser's cookie store, send your request, and finally restore them:
var cookies_temp = [];    // where you put the cookies first
var my_cookie_store = []; // the cookies will be there during the request
var details = {/*your code*/}; // the first parameter for chrome.cookies.getAll()

var start_kidnapping = function(cookies) {
    cookies_temp = cookies.slice();
    kidnap_cookie();
};

var kidnap_cookie = function() {
    // This recursive function will store the cookies from cookies_temp to
    // my_cookie_store and then remove them from the browser's cookie store.
    if (cookies_temp.length == 0) { // when no more cookies, end recursion
        send_request();
    } else {
        var cookie = cookies_temp.pop();
        // We store url as a property since it is useful later.
        // You may want to change the scheme.
        cookie.url = "http://" + cookie.domain + cookie.path;
        my_cookie_store.push(cookie); // save it
        chrome.cookies.remove({url: cookie.url, name: cookie.name}, kidnap_cookie);
    }
};

var send_request = function() {
    // Send your request here. It can be asynchronous.
    for (var i = 0; i < my_cookie_store.length; i++) {
        var cookie = my_cookie_store[i];
        delete cookie.hostOnly; // these 2 properties are not part of the
        delete cookie.session;  // object required by chrome.cookies.set()
        // note that at this point, cookie is no longer a Cookie object
        chrome.cookies.set(cookie); // restore cookie
    }
    my_cookie_store = []; // empty it for new adventures
};

chrome.cookies.getAll(details, start_kidnapping); // start
Alternatively, a simpler solution is to open an incognito window to send the request, using the chrome.windows module, but this will prevent you from communicating with the rest of your extension. Note that you may have to change the incognito property of your manifest to "split":
var incognito_window = {
    "url": "incognito.html",
    "focused": false, // do not bother user
    "incognito": true
};

chrome.windows.create(incognito_window);
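The manifest change mentioned above would look roughly like this (only the incognito key matters here; the name and version values are placeholders):
{
    "name": "My Extension",
    "version": "1.0",
    "manifest_version": 2,
    "incognito": "split"
}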

Google Chrome: XMLHttpRequest.send() not working while doing POST

I'm working on an application that allows the user to send a file using a form (a POST request), and that executes a series of GET requests while that file is being uploaded to gather information about the state of the upload.
It works fine in IE and Firefox, but not so much in Chrome and Safari.
The problem is that even though send() is called on the XMLHttpRequest object, nothing is being requested as can be seen in Fiddler.
To be more specific, an event handler is placed on the form's "submit" event, which places a timeout function call on the window:
window.setTimeout(startPolling, 10);
In this "startPolling" function a sequence is started that keeps firing GET requests to receive status updates from a web service returning text/json, which is used to update the UI.
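For context, a stripped-down reconstruction of what such a polling loop might look like (the endpoint, interval, and response shape are assumptions, not the original code):
function startPolling() {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/upload-status", true); // status endpoint is an assumption
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            var status = JSON.parse(xhr.responseText);
            // ...update the UI from `status` here...
            if (!status.done) {
                window.setTimeout(startPolling, 500); // poll again
            }
        }
    };
    xhr.send();
}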
Is this a limitation (perhaps security-wise?) of WebKit-based browsers? Is this a Chrome bug? (I'm seeing the same behaviour in Safari, though.)
I am having the exact same problem. At the moment I use an iframe, which is targeted by the form. That allows the XHR requests to be executed while posting. While that does work, it doesn't degrade gracefully if someone disables JavaScript (I couldn't load the next page outside the iframe without JS). So if someone has a nicer solution, I would be grateful to hear it.
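The iframe approach described above looks roughly like this (a sketch with assumed names: the form is pointed at a hidden iframe so the POST response loads there instead of navigating the page, leaving the polling XHRs free to run):
// Create a hidden iframe to receive the POST response.
var iframe = document.createElement('iframe');
iframe.name = 'upload_target';
iframe.style.display = 'none';
document.body.appendChild(iframe);

// Target the upload form at the iframe so submitting it doesn't unload the page.
var form = document.querySelector('form[enctype="multipart/form-data"]');
form.target = 'upload_target';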
Here's the jQuery script for reference:
$(function() {
    $('form[enctype=multipart/form-data]').submit(function() {
        // Prevent multiple submits
        if ($.data(this, 'submitted')) return false;

        var freq = 500; // frequency of update in ms
        var progress_url = '{% url checker_progress %}'; // ajax view serving progress info

        $("#progressbar").progressbar({
            value: 0
        });
        $("#progress").overlay({
            top: 272,
            api: true,
            closeOnClick: false,
            closeOnEsc: false,
            expose: {
                color: '#333',
                loadSpeed: 1000,
                opacity: 0.9
            }
        }).load();

        // Update progress bar
        function update_progress_info() {
            $.getJSON(progress_url, {}, function(data, status) {
                if (data) {
                    var progress = parseInt(data.progress);
                    $('#progressbar div').animate({ width: progress + '%' }, { queue: 'false', duration: 500, easing: "swing" });
                }
                window.setTimeout(update_progress_info, freq);
            });
        }

        window.setTimeout(update_progress_info, freq);
        $.data(this, 'submitted', true); // mark form as submitted.
    });
});
