I am building an app that pulls files from SharePoint 2013 or SharePoint 2010 for viewing in HTML. In C#, files are pulled out of SharePoint (multipage documents like Word, Excel, PDF, TIFF, etc.), then fed into various 3rd party software (DataLogics and Aspose), which breaks the documents down into their individual pages and streams the individual pages to the browser in PNG format.
So in HTML, we have an img element whose src is set to a specific URL in an ASHX service. The ASHX service grabs the file out of SharePoint and, based on query string params, returns the desired page as a Stream.
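Concretely, the viewer just points an img at the handler. A minimal sketch of that (the handler path and query-string parameter names here are hypothetical, not our actual code):
var img = new Image();
// hypothetical handler path and parameter names
img.src = "/FileTransfer.ashx?uniqueId=" + uniqueId + "&pageNum=" + pageNum;
document.getElementById("viewer").appendChild(img);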
Here is how we shoot it back:
[WebService(Namespace = "url")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
public class FileTransfer : IHttpHandler, IReadOnlySessionState
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "image/png"; // pages are streamed back as PNG
        using (var stream = GetStream(context.Request))
        {
            int chunkSize = 2097152; // 2MB
            byte[] chunk = new byte[chunkSize];
            int bytesRead;
            do
            {
                bytesRead = stream.Read(chunk, 0, chunkSize);
                context.Response.OutputStream.Write(chunk, 0, bytesRead);
            }
            while (bytesRead > 0);
        }
    }

    public bool IsReusable { get { return false; } }
}
This works perfectly 100% of the time in any browser when the file we are breaking down comes directly from SharePoint.
We also provide a feature where the user can upload a document. This is where the problem comes in. Uploaded documents are not saved in SharePoint. Instead their data is stored in SessionState until the user chooses to save. Files are uploaded to an ASMX service, then the browser requests their individual pages via the above ASHX.
Files are uploaded like this in an ASMX service:
[WebMethod(EnableSession = true)]
[ScriptMethod(ResponseFormat = ResponseFormat.Json)]
public object Upload()
{
    var request = HttpContext.Current.Request;
    if (request.Files.Count == 1)
    {
        var uniqueId = request["uniqueId"];
        var pageNum = request["pageNum"]; // sent along in the FormData below
        var file = request.Files[0];
        using (var memoryStream = new MemoryStream())
        {
            file.InputStream.CopyTo(memoryStream);
            var docInfo = UploadItem(uniqueId, pageNum, memoryStream.ToArray());
            return docInfo;
        }
    }
    return null;
}
UploadItem adds the uniqueId and byte[] to SessionState.
Files are sent from javascript like this (FileUpload being tied to the change event of an input of type=file):
this.FileUpload = function (files) {
var upload = new XMLHttpRequest();
upload.onreadystatechange = () => {
if (upload.readyState == 4) {
// handle response
}
};
UpdateFormDigest((<any>window)._spPageContextInfo.webServerRelativeUrl,(<any>window)._spFormDigestRefreshInterval);
var data = new FormData();
data.append("uniqueId", uniqueId);
data.append("pageNum", pageNum);
data.append("data", files[0]);
upload.open('POST', "myurl");
upload.setRequestHeader("X-RequestDigest", $("#__REQUESTDIGEST").val());
upload.send(data);
};
Now we come to the actual bug.
Images are rendered using:
<img src="url to ASHX service" />
In Firefox and Chrome, page images from uploaded documents always show up just fine. But IE (9, 10, or 11) renders only the first portion of them, then shows broken image icons on the remaining image placeholders. For these broken images, the Network tab of IE shows it received 0 KB, and the error event is hit. But if I put a breakpoint in the ASHX just before it returns the stream, the stream always has a size.
More interestingly, if you take the url the src points to and paste it into a new window, the image shows up just fine.
I even tried to load the images in javascript first like this:
var img = new Image();
img.onload = function(){
// use jquery to append image to page
};
img.src = "url to ASHX service";
In this scenario, Chrome and Firefox work fine as usual, but IE fails again. Except this way, the Network tab of IE shows the correct size in KB received in the response. However, it still shows the broken image icon and won't render images to the screen after some unknown threshold. The first several images come back, but once one breaks, all of the rest break.
I also modified the ASHX service to return base64 data instead of a stream, then bound the base64 to the src. In the debugger you can see the base64 assigned to the src of the img elements that show the broken image icon. So the data is there for sure, but IE just isn't rendering it...
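For reference, the binding looked roughly like this (a minimal sketch; base64Data and the container id are stand-ins, not the actual code). It's worth noting that IE8 famously capped data URIs at 32 KB, and later IE versions have remained fragile with very large data URIs, which is consistent with the size threshold described above:
// minimal sketch of binding the base64 payload from the ASHX to an img src;
// "base64Data" and the container id are assumptions, not the actual code
var img = new Image();
img.src = "data:image/png;base64," + base64Data;
document.getElementById("pageContainer").appendChild(img);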
I tried to recreate this problem outside of our SharePoint environment in this fiddle using Knockout JS. Basically, I grab a ton of big images and throw them on the screen with each button click. But it works just fine. It works perfectly if I use jQuery too.
http://jsfiddle.net/bsdez92f/
Not sure where to go from here.
Any ideas?
So it turns out that the image size was causing a problem. I scaled the images down to thumbnail size on the server side and returned that to the browser. All is working fine at this point.
I am still new to Syncfusion. Currently I've built a table with a script (document.ready) function that merges the table cells with similar values. The table displays successfully in Google Chrome on my localhost, and the columns containing similar values are merged as expected. A function that generates the webpage as a PDF also works, but the columns of the table in the PDF file are not merged, so I assume the script is not executed during my PDF conversion.
This is my PDF Function:
private void printpdf()
{
//printpdf
//Initialize HTML to PDF converter
HtmlToPdfConverter htmlConverter = new HtmlToPdfConverter(HtmlRenderingEngine.WebKit);
WebKitConverterSettings settings = new WebKitConverterSettings();
//Set WebKit path
settings.WebKitPath = Server.MapPath("~/QtBinaries");
settings.EnableJavaScript = true;
settings.AdditionalDelay = 5000;
//Assign WebKit settings to HTML converter
htmlConverter.ConverterSettings = settings;
//Get the current URL
string url = HttpContext.Current.Request.Url.AbsoluteUri;
//Convert URL to PDF
Syncfusion.Pdf.PdfDocument document = htmlConverter.Convert(url);
//Save the document
document.Save("Output.pdf", HttpContext.Current.Response, HttpReadType.Save);
}
This is my Script Function on aspx file:
$(document).ready(function () {
    // ... (cell-merging logic elided)
});
The WebKit rendering engine preserves the PDF output the same way the input HTML is displayed in WebKit-based web browsers (for example, Safari). So, kindly ensure your webpage renders as expected in a WebKit-based browser. If that is not possible, kindly share the complete HTML file with us (save the webpage from a web browser and share the complete HTML file with styles, scripts, etc.), so that it will be helpful for us to analyze and assist you further on this.
If your web page renders properly in the Chrome browser, kindly try our latest Blink rendering engine for the conversion. It will preserve the output PDF document the same way the input HTML is displayed in Chromium-based browsers. Please refer to the links below for more information:
https://help.syncfusion.com/file-formats/pdf/convert-html-to-pdf/blink
https://www.syncfusion.com/kb/10258/how-to-convert-html-to-pdf-in-azure-using-blink
I have a problem (or maybe two) with saving files using the HTML5 File API.
A file comes from the server as a byte array, and I need to save it. I tried several approaches described on SO:
creating blob and opening it in a new tab
creating a hidden anchor tag with "data:" in href attribute
using FileSaver.js
All approaches let me save the file, but the file gets broken because its encoding is changed to UTF-8, while the file (in the current test case) is in ANSI. And it seems I have two problems: one on the server side and one on the client side.
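For instance, the FileSaver.js variant (the third bullet above) looked roughly like this; a minimal sketch where data, contentType, and fileName stand in for values taken from the server response:
// minimal sketch of the FileSaver.js approach; "data", "contentType" and
// "fileName" are placeholders for values from the server response
var blob = new Blob([data], { type: contentType });
saveAs(blob, fileName); // saveAs comes from FileSaver.js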
Server side:
Server side is an ASP.NET Web API 2 app, whose controller sends the file using HttpResponseMessage with StreamContent. The ContentType is correct and corresponds to the actual file type.
But as can be seen on the screenshot below, the server's answer (data.length) is less than the actual file size calculated at upload (file.size). The screenshot also shows that the HTML5 File object has yet another size (f.size).
If I add a CharSet with the value "ANSI" to the server response message's ContentType property, the file data comes back the same as it was uploaded, but on saving, the resulting file still has the wrong size and is broken:
Client side:
I tried to set the charset using the JS File options, but it didn't help. As can be found here and here, Eli Grey, the author of FileSaver.js, says that
The encoding/charset in the type is just metadata for the browser, not an encoding directive.
which means, if I understood it correctly, that it is impossible to change the encoding of the file this way.
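A quick way to convince yourself of that: a Blob stores whatever bytes you hand it verbatim, and the charset in its type is only metadata. A minimal sketch (not from the original code):
// a Blob stores the bytes verbatim; the charset in "type" is metadata only
var bytes = new Uint8Array([0xC0, 0xC1, 0xC2]); // e.g. three Windows-1251 bytes
var blob = new Blob([bytes], { type: "text/plain;charset=windows-1251" });
var reader = new FileReader();
reader.onload = function () {
    // logs [192, 193, 194] - the bytes are unchanged by the charset label
    console.log(new Uint8Array(reader.result));
};
reader.readAsArrayBuffer(blob);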
Issue result: in the end I can successfully download files, but they are broken and cannot be opened.
So I have two questions:
How can I save a file "as is" using the File API? At present I cannot use the simple way of a direct link with the 'download' attribute, because of a server-side check for an access_token in the request header. Maybe this is the "bottleneck" of the problem?
How can I avoid setting CharSet on the server side and send the byte array "as is"? While the first problem could be hacked around in some way, I guess this one is more critical. For example, while the "ANSI" charset solves the problem for the current file, WinMerge shows that its actual encoding is Cyrillic 'Windows-1251', and any other file could have any other encoding.
P.S. The issue occurs with all file types (extensions) except *.txt.
Update
Server side code:
public HttpResponseMessage DownloadAttachment(Guid fileId)
{
    // "file" holds the stored metadata (name, size, content type);
    // GetFileInfo is a placeholder for however that lookup is done
    var file = GetFileInfo(fileId);
    var stream = GetFileStream(fileId);
    var message = new HttpResponseMessage(HttpStatusCode.OK);
    message.Content = new StreamContent(stream);
    message.Content.Headers.ContentLength = file.Size;
    message.Content.Headers.ContentType = new MediaTypeHeaderValue(file.ContentType)
    {
        // without this charset files sent with bigger size
        // than they are as shown on image 1
        // (note: "ANSI" is not a registered charset name)
        CharSet = "ANSI"
    };
    message.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
    {
        FileName = file.FileName + file.Extension,
        Size = file.Size
    };
    return message;
}
Client side code (TypeScript):
/*
* Handler for click event on download <a> tag
*/
private downloadFile(file: Models.File) {
var self = this;
this.$service.downloadAttachment(this.entityId, file.fileId).then(
// on success
function (data, status, headers, config) {
var fileName = file.fileName + file.extension;
var clientFile = new File([data], fileName);
// here's the issue ---^
saveAs(clientFile, fileName);
},
// on fail
function (error) {
self.alertError(error);
});
}
My code is almost the same as in the answers to related questions on SO: instead of setting a direct link in the 'a' tag, I handle the click on it and download the file content via XHR (in my case using the AngularJS $http service). Having got the file content, I create a Blob object (in my case I use the File class, which derives from Blob) and then try to save it using FileSaver.js. I also tried the approach with an encoded URL to the Blob in the href attribute, but it only opens a new tab with a file broken in the same way. I found that the problem is in the Blob class: calling its constructor with 'normal' file data, I get an instance with the 'wrong' size, as can be seen in the first two screenshots. So, as I understand it, my problem is not in the way I try to save the file, but in the way I create it with the File API.
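If the corruption happens because the response body is decoded as text, asking XHR for the raw bytes usually fixes it. A minimal sketch, assuming $service.downloadAttachment wraps an AngularJS $http call (the url and token handling here are assumptions):
// minimal sketch: request the body as raw bytes so the browser never tries
// to decode it as UTF-8 text; url and token handling are assumptions
$http.get(url, {
    responseType: "arraybuffer", // or "blob"
    headers: { "Authorization": "Bearer " + accessToken } // keeps the server-side token check working
}).then(function (response) {
    var blob = new Blob([response.data], { type: response.headers("Content-Type") });
    saveAs(blob, fileName); // FileSaver.js
});
With responseType set, response.data arrives as an ArrayBuffer, so the Blob is built from the exact bytes the server sent and no CharSet workaround is needed on the server.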
I'm working on a web application that involves loading images into a canvas object, then manipulating those images beyond recognition. I need to hide the original source image file (a jpeg) so that the user on the client side is not able to use dev tools to see the original image.
I have tried encoding the images as base64 and loading them via a JSON data file, but even with this method, the inspector tool still shows the original image file (when it is set as the src of my javascript image object). Is there some way that I can encrypt and decrypt the image files, so that the user has no way of seeing the original image (or have it be some garbled image, for example)? Preferably I'd like to do this on the client side, as all my code is client side at the moment. Thanks in advance!
Here is my code for loading the base64 encoded image data via a JSON file:
//LOAD JSON INSTEAD?
$.getJSON("media/masks.json", function (data) {
    console.log("media/masks.json LOADED");
    //loop through data
    var cnt = 0;
    for (var key in data) {
        if (data.hasOwnProperty(key)) {
            var imgData = data[key];
            var elementId = $scope.masks[cnt].id;
            //create image object from data; wait for onload before drawing,
            //otherwise drawImage runs against an image that hasn't loaded yet
            (function (id, src) {
                var image = new Image();
                image.onload = function () {
                    // copy the image to a canvas
                    var imagecanvas = document.createElement('CANVAS');
                    imagecanvas.width = image.width;
                    imagecanvas.height = image.height;
                    imagecanvas.getContext('2d').drawImage(image, 0, 0);
                    imageCanvases[id] = imagecanvas;
                };
                image.src = src;
            })(elementId, imgData);
        }
        cnt++;
    }
});
This is what I see in the Chrome dev tools Network inspector (exactly what I'm trying to avoid):
I need to hide the original source image file (a jpeg) so that the user on the client side should not be able to use dev tools to see the original image.
That's not possible. There is always a way to get at the image using developer tools. Even if there wasn't, a simple screen capture would defeat whatever measures you put in place.
This is about exporting extension data from the options page.
I have an array of objects with page screenshots stored as base64, and some other minor object properties. I'm trying to export them with this code:
exp.onclick = expData;
function expData() {
chrome.storage.local.get('extData', function (result) {
var dataToSave = result.extData;
var strSt = JSON.stringify(dataToSave);
downloadFn('extData.txt', strSt);
});
}
function downloadFn(filename, text) {
var fLink = document.createElement('a');
fLink.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(text));
fLink.setAttribute('download', filename);
fLink.click();
}
On button click: get the data from storage, stringify it, create a fake link, set its attributes, and click it.
The code works fine if the resulting file is under ~1.7 MB, but anything above that causes the options page to crash and the extension gets disabled.
I can console.log(strSt) after JSON.stringify, and everything works fine regardless of size, as long as I don't pass it to the download function.
Is there anything I can do to fix the code and avoid the crash? Or is there a size limitation when using this method?
I solved this, as Xan suggested, by switching to chrome.downloads (it requires an extra permission, but works fine).
What I did is just replace the code in the downloadFn function; it's cleaner that way.
function downloadFn(filename, text) {
var eucTxt = encodeURIComponent(text);
chrome.downloads.download({'url': 'data:text/plain;charset=utf-8,'+eucTxt, 'saveAs': false, 'filename': filename});
}
Note that using URL.createObjectURL(new Blob([text])) also produced the same extension crash.
EDIT:
As @dandavis pointed out (and RobW confirmed), converting to a Blob also works
(I had messed-up code that was producing the crash).
This is a better way of saving data locally, because on the browser's internal downloads page, data-URL downloads can clutter the page, and if the file is too big (a long URL), it crashes the browser. Data-URL downloads are presented as actual URLs (which is the raw saved data), while blob downloads are listed only with an id.
function downloadFn(filename, text) {
    var vLink = document.createElement('a'),
        vBlob = new Blob([text], {type: "octet/stream"}),
        vUrl = window.URL.createObjectURL(vBlob);
    vLink.setAttribute('href', vUrl);
    vLink.setAttribute('download', filename);
    vLink.click();
    window.URL.revokeObjectURL(vUrl); // release the blob URL once the download has been triggered
}
I created a coupon-creator system that uses HTML5 canvas to spit out a jpg version of the coupon you create. Since I'm not hosting the finalized jpg on a server, I am having trouble retrieving the URL. On some browsers, when I drag the image into the address bar, all I get is "data:" in the address bar. But on Windows, if I drag it into an input field, it sometimes spits out the huge (>200 char) local temp URL. How can I use javascript(?) to find that exact temporary URL of the image generated by my coupon creator and post it in an input form on the same page? Also, it'd be very helpful if you know the answer to this as well, as I assume it is correlated with retrieving the URL: when the user clicks the "Save it" link after the coupon is generated, how can I have it save the created image to their computer? Thanks a lot!
This is what I'm using in JS right now to generate the image:
function letsDo() {
html2canvas([document.getElementById('coupon')], {
onrendered: function (canvas) {
document.getElementById('canvas').appendChild(canvas);
var data = canvas.toDataURL('image/jpeg');
// AJAX call to send `data` to a PHP file that creates an image from the dataURI string and saves it to a directory on the server
var mycustomcoupon = new Image();
mycustomcoupon.src = data;
//Display the final product under this ID
document.getElementById('your_coupon').appendChild(mycustomcoupon);
document.getElementById('your_coupon_txt').style.display="block";
}
});
}
Here is the live URL of the creator: http://isleybear.com/coupon/
I ended up dropping this code into the js above. It was a pretty simple fix. Then, to test it, I wired up an onclick on an HTML element to show the source.
var mycustomcoupon = document.getElementById('your_coupon');
mycustomcoupon.src = data;
}
});
}
function showSource(){
var source = document.getElementById('your_coupon').src;
alert(source);
}
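As for the second part of the question (saving the generated image to the user's computer), a minimal sketch using the anchor download attribute could look like this; the element id and filename are assumptions, and older IE versions don't support the download attribute:
// minimal sketch for the "Save it" link: reuse the dataURL already assigned
// to the coupon img and trigger a download; id and filename are assumptions
function saveCoupon() {
    var link = document.createElement('a');
    link.href = document.getElementById('your_coupon').src; // the jpeg dataURL
    link.download = 'mycoupon.jpg';
    document.body.appendChild(link); // Firefox requires the link to be in the DOM
    link.click();
    document.body.removeChild(link);
}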