Open docx SharePoint-like - JavaScript

I'm trying to open a .docx file the way SharePoint does.
I've set up an Apache 2 web server, including the WebDAV part.
I know that it works with the following small piece of JavaScript:
var obj = new ActiveXObject('SharePoint.OpenDocuments.3');
But when I use that piece of code, the .docx opens in Word with the edit bar as expected, yet when I click Edit the document stays in read-only mode.
What could be the problem?
Below you'll find the WebDAV part of my Apache config.
Another question: this piece of code won't work in Firefox because of the ActiveXObject. Does anybody have an idea how I could get it to work in Firefox as well?
Because we already have a big application for which I'm trying to implement this, switching the whole application to SharePoint is not an option for us.
Apache 2 conf:

DavLockDB WebDAV/Locks

<Directory "Uploads">
    Dav On
    ForceType text/plain
    AuthType Basic
    AuthName "Mein WebDAV"
    AuthUserFile C:\Users
    Require valid-user
    AllowOverride None
    Options Indexes
</Directory>
Sincerely
k3n0b1

Solved!
The problem was that my WebDAV folder was inside the same directory structure as the HTTP docs.
I think that was it, because as soon as I defined another folder outside of the HTTP docs path it started to work.
All you need is this small piece of JavaScript:
var obj = new ActiveXObject('SharePoint.OpenDocuments.3');
obj.EditDocument('https://localhost/uploads/****.docx');
and a correctly configured WebDAV setup in Apache 2.
Now I just have to work out how to make it work in Firefox, Chrome, etc.
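In case it helps anyone with the Firefox/Chrome part: one approach I've seen mentioned (untested here, so treat it as a sketch) is the Office URI scheme, which desktop Office 2010 SP2 and later registers as a protocol handler. The file name below is just a placeholder.

function editInWord(url) {
    if (window.ActiveXObject || 'ActiveXObject' in window) {
        // IE: use the SharePoint OpenDocuments control, as above
        var obj = new ActiveXObject('SharePoint.OpenDocuments.3');
        obj.EditDocument(url);
    } else {
        // Firefox/Chrome: 'ms-word:ofe|u|<url>' asks Word to open the
        // document for editing; WebDAV is still used for the write-back
        window.location.href = 'ms-word:ofe|u|' + url;
    }
}

editInWord('https://localhost/uploads/your-document.docx'); // placeholder file name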

Related

Firefox not downloading correct file to machine

I have a React application that I built that uses Redux, React Router v4, and D3 for data visualization.
My app contains a force-directed graph, a table, and a histogram. Each of these views contains clickable nodes, table cells, and bars; when one is clicked, a file should download to the user's machine. I recently updated Firefox to the latest version, 62.0.2, and the download no longer functions as expected; however, it still works fine in Chrome.
The files sit on the same domain as the application and I've coded the download to work like so:
let newlink = document.createElement('a');
newlink.setAttribute('download', 'https://www.example.com/docs/xml/file1.xml');
newlink.setAttribute('href', 'https://www.example.com/docs/xml/file1.xml');
newlink.setAttribute('target', '_blank');
document.body.appendChild(newlink);
newlink.click();
What Firefox is doing is downloading the index.html file at my app root rather than what is in the URL variable (e.g. https://www.example.com/docs/xml/file1.xml). The dialog shows that it is in fact trying to save the file with the correct name (Firefox has automatically replaced '/' with underscores in the file name). The domain is correct, but the location does not contain the full URL to the file. Is the full URL somehow being chopped off?
The type on the dialog box is HTML (which is incorrect; all my files are either XML or TXT), and if the user selects Save or Open, it either saves index.html or opens a blank/black web page. I'm going crazy trying to figure out what is happening here. Please help!
What ended up fixing this in Firefox was unregistering the service worker from the application's domain by navigating to about:serviceworkers (in Firefox). I then commented out the register-service-worker call in my UI code, since I'm not using service workers for this application anyway. For whatever reason the service worker was intercepting the file download and causing the browser to download index.html rather than the text file it was supposed to fetch. Once I did these two things, the file downloaded correctly. If anyone knows why that would be, I'd love a comment.
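For anyone who wants to do that cleanup from code instead of about:serviceworkers, a minimal sketch (assuming you really don't need the worker for anything else):

// Unregister every service worker registered for this origin
if ('serviceWorker' in navigator) {
    navigator.serviceWorker.getRegistrations().then(function (registrations) {
        registrations.forEach(function (registration) {
            registration.unregister(); // resolves to true if it was removed
        });
    });
}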

Why does my AJAX request give an error depending on the file extension?

I have a bit of JavaScript using jQuery that loads data with a quick $.get(url, function(response){ /* ... */}); The data is a straight up text file that is then handled by the JavaScript in that response function.
This has worked for me quite nicely, but I just ran into this problem on my machine: Using the same code, I now get an error saying:
XML Parsing Error: not well-formed
Location: moz-nullprincipal:{74091275-3d54-4959-9613-5005459421ce}
Line Number 1, Column 16: image:tiles.png;
---------------^
If I load this from another server, it works perfectly. It's only when I host it on my own PC that I get this error (note that it previously worked perfectly on my own PC as well, which is running Ubuntu and serving the page with Apache). After much headbanging, I found that if I change the extension of the file I'm loading, it works fine. The file was previously named "test.sprite", and that is when I got the error. If I rename it to "test.txt", it loads fine.
This error *seems* to coincide with a recent upgrade on my system. I upgraded Ubuntu 10.something to 12.04. I'm assuming there was some sort of update to the Apache config that I didn't notice which is causing it to send different headers depending on the extension of the file (the two files named here are identical; the .txt is actually just a symlink to the .sprite).
So I have a solution to my immediate problem, but I'd rather not bow to the system's idiosyncrasies. Any idea how I can fix this without renaming the file?
Please note that I'm not an Apache expert, but I'll have a crack at pointing you in the right direction.
If no dataType is specified, the jQuery AJAX functions will assume the content type is whatever the Content-Type header Apache sends back says it is. You can quite simply see what the response is by running your code in Chrome, opening the developer tools (Ctrl + Shift + J) and choosing "Network". After clicking on the relevant request you will see the response headers, including the Content-Type.
In your Apache configuration the content type for .sprite files is probably not defined. You can add it with the following line:
AddType 'text/plain; charset=UTF-8' .sprite
This should be in a configuration file parsed by Apache - depending on your version this could be apache.conf, httpd.conf, or another file.
I hope this helps or at least points you in the right direction. Remember to configtest before restarting Apache!
Check the Content-Type of the response header; make sure the header you receive from the remote server and the one from your local machine have the same Content-Type, i.e. the same file type and the same encoding, something like "Content-Type: text/html; charset=UTF-8".
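If changing the Apache configuration isn't an option, a purely client-side workaround (a sketch using jQuery's documented dataType argument) is to tell jQuery explicitly how to treat the response instead of letting it guess from the Content-Type header:

// Force jQuery to treat the response as plain text, whatever Apache says
$.get(url, function (response) {
    // handle the sprite data here, as before
}, 'text');

// or the long form, which makes the intent a bit more obvious:
$.ajax({
    url: url,
    dataType: 'text', // skip jQuery's automatic XML/JSON parsing
    success: function (response) { /* ... */ }
});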

XMLHttpRequest cannot load file:///. Origin null is not allowed by Access-Control-Allow-Origin [duplicate]

I'm trying to create a website that can be downloaded and run locally by launching its index file.
All the files are local, no resources are used online.
When I try to use the AJAXSLT plugin for jQuery to process an XML file with an XSL template (in sub directories), I receive the following errors:
XMLHttpRequest cannot load file:///C:/path/to/XSL%20Website/data/home.xml. Origin null is not allowed by Access-Control-Allow-Origin.
XMLHttpRequest cannot load file:///C:/path/to/XSL%20Website/assets/xsl/main.xsl. Origin null is not allowed by Access-Control-Allow-Origin.
The index file making the request is file:///C:/path/to/XSL%20Website/index.html while the JavaScript files used are stored in file:///C:/path/to/XSL%20Website/assets/js/.
How can I fix this issue?
For instances where running a local web server is not an option, you can allow Chrome access to file:// files via a browser switch. After some digging, I found this discussion, which mentions a browser switch in the opening post. Run your Chrome instance with:
chrome.exe --allow-file-access-from-files
This may be acceptable for development environments, but little else. You certainly don't want this on all the time. This still appears to be an open issue (as of Jan 2011).
See also: Problems with jQuery getJSON using local files in Chrome
Essentially the only way to deal with this is to have a web server running on localhost and to serve the files from there.
It is insecure for a browser to allow an AJAX request to access any file on your computer, therefore most browsers seem to treat "file://" requests as having no origin for the purposes of the same-origin policy.
Starting a web server can be as trivial as cd-ing into the directory the files are in and running:
python -m http.server
[Edit: Thanks @alextercete for pointing out that this has been updated in Python 3]
This solution will allow you to load a local script using jQuery.getScript(). This is a global setting but you can also set the crossDomain option on a per-request basis.
$.ajaxPrefilter("json script", function (options) {
    options.crossDomain = true;
});
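For reference, a per-request version of the same trick might look like this (a sketch; the path is a placeholder, and it assumes jQuery 1.5+, where crossDomain is a supported $.ajax option):

$.ajax({
    url: 'file:///C:/path/to/local-script.js', // placeholder local path
    dataType: 'script',
    crossDomain: true // forces the script transport instead of XMLHttpRequest
});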
What about using the JavaScript FileReader API to open the local file, i.e.:
<input type="file" name="filename" id="filename">
<script>
$("#filename").change(function (e) {
    if (e.target.files != undefined) {
        var reader = new FileReader();
        reader.onload = function (e) {
            // Get all the contents in the file
            var data = e.target.result;
            // other stuff...
        };
        reader.readAsText(e.target.files.item(0));
    }
});
</script>
Now click the Choose File button and browse to the file file:///C:/path/to/XSL%20Website/data/home.xml
Here is an AppleScript that will launch Chrome with the --allow-file-access-from-files switch turned on, for OSX/Chrome devs out there:
set chromePath to POSIX path of "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"
set switch to " --allow-file-access-from-files"
do shell script (quoted form of chromePath) & switch & " > /dev/null 2>&1 &"
Launch Chrome like so to bypass this restriction: open -a "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --args --allow-file-access-from-files.
Derived from Josh Lee's comment, but I needed to specify the full path to Google Chrome so as to avoid having Google Chrome open from my Windows partition (in Parallels).
The way I just worked around this is not to use XMLHttpRequest at all, but to include the data needed in a separate JavaScript file instead. (In my case I needed a binary SQLite blob to use with https://github.com/kripken/sql.js/)
I created a file called base64_data.js (and used btoa() to convert the data that I needed and insert it into a <div> so I could copy it).
var base64_data = "U1FMaXRlIGZvcm1hdCAzAAQA ...<snip lots of data> AhEHwA==";
and then included the data in the HTML like normal JavaScript:
<div id="test"></div>
<script src="base64_data.js"></script>
<script>
    var data = atob(base64_data);
    var sqldb = new SQL.Database(data);
    // Database test code from the sql.js project
    var test = sqldb.exec("SELECT * FROM Genre");
    document.getElementById("test").textContent = JSON.stringify(test);
</script>
I imagine it would be trivial to modify this to read JSON, maybe even XML; I'll leave that as an exercise for the reader ;)
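For what it's worth, a quick sketch of those variants, reusing the same base64_data as above and assuming it holds JSON or XML text rather than a SQLite blob:

var text = atob(base64_data);

// JSON: just parse the decoded string
var obj = JSON.parse(text);

// XML: DOMParser turns the decoded string into a document
var xmlDoc = new DOMParser().parseFromString(text, 'application/xml');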
You can try setting an 'Access-Control-Allow-Origin': '*' header in the server response, i.e. in Node's response.writeHead(statusCode, { 'Access-Control-Allow-Origin': '*' }).
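That only applies if you serve the files yourself instead of opening them from file://. A minimal Node.js sketch of the idea (for local development only: no MIME types and no path sanitising):

var http = require('http');
var fs = require('fs');
var path = require('path');

http.createServer(function (req, res) {
    var file = path.join(__dirname, req.url === '/' ? 'index.html' : req.url);
    fs.readFile(file, function (err, data) {
        if (err) {
            res.writeHead(404);
            res.end('Not found');
            return;
        }
        // The header from the answer above, sent with every file
        res.writeHead(200, { 'Access-Control-Allow-Origin': '*' });
        res.end(data);
    });
}).listen(8080, function () {
    console.log('Serving on http://localhost:8080');
});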
Use the 'Web Server for Chrome' app (you may actually already have it on your PC, whether you know it or not; just search for it in Cortana). Open it, click 'Choose folder', and select the folder containing your file; do not select the file itself. Then click the link(s) shown under the 'Choose folder' button.
If that doesn't take you straight to the file, add the name of the file to the URL, like this:
https://127.0.0.1:8887/fileName.txt
Web Server for Chrome is available from the Chrome Web Store.
If you only need to access the files locally, then you can include the exact path to the file. Rather than using
../images/img.jpg
use
C:/Users/username/directoryToImg/img.jpg
The reason the CORS error happens is that you are trying to traverse to another directory within the web page; by including the direct path you are not changing directory, you are pulling from a direct location.

Is there any way to 'simulate' right-click save-as command or force download of file in the browser with JavaScript?

I have a situation where we have media files stored on a global CDN. Our web app is hosted on its own server, and when the media assets are needed they are requested from the CDN URL. Recently we had a page where the user can download file attachments, but some of the file types were opening in the browser instead of downloading (such as MP3). The only way around this was to manually set the HTTP response to attach the file, but the only way I could achieve that was to download the file from the CDN to my server and then feed it back to the user, which defeats the purpose of having it on the global CDN. So I am wondering if there is some client-side solution for this?
EDIT: Just found this somewhere, though I'm not sure if it will work right in all browsers?
<body>
<script>
function downloadme(x) {
    var myTempWindow = window.open(x, '', 'left=10000,screenX=10000');
    myTempWindow.document.execCommand('SaveAs', 'null', 'download.pdf');
    myTempWindow.close();
}
</script>
<a href="javascript:downloadme('/test.pdf');">Download this pdf</a>
</body>
RE-EDIT: Oh well, so much for that idea -> Does execCommand SaveAs work in Firefox?
Does your CDN allow you to specify the HTTP headers? Amazon CloudFront does, for example.
I found an easy solution to this that worked for me: add a URL parameter to the file name. This will trick the browser into bypassing its built-in file mappings. For example, instead of http://mydomain.com/file.pdf, set your client-side link up to point to http://mydomain.com/file.pdf? (with an added question mark).
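Years later, for anyone finding this: a purely client-side sketch that works in current browsers. The CDN URL and file name are placeholders, and it assumes the CDN sends CORS headers that let your origin read the file; otherwise you are back to setting Content-Disposition server-side.

function forceDownload(url, filename) {
    fetch(url)
        .then(function (response) { return response.blob(); })
        .then(function (blob) {
            var link = document.createElement('a');
            link.href = URL.createObjectURL(blob);
            link.download = filename; // suggested name in the save dialog
            document.body.appendChild(link);
            link.click();
            URL.revokeObjectURL(link.href);
            link.remove();
        });
}

forceDownload('https://cdn.example.com/audio/track.mp3', 'track.mp3'); // placeholder URL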

Generate some xml in javascript, prompt user to save it

I'd like to make an XML document in JavaScript then have a save dialog appear.
1. It's OK if they have to click before the save can occur.
2. It's *not* OK if I *have* to use IE to achieve this (I don't even need to support it at all). However, Windows is a required platform (so Firefox or Chrome are the preferred browsers if I can only do this in one browser).
3. It's *not* OK if I need a web server. But conversely, I don't want to require the JavaScript to be run on a local file only, i.e. elevated privileges -- if possible. That is, I'd like it to run locally or on a *static* host. But just locally is OK.
4. It's OK to have to bend over backwards to do this. The file won't be very big, but internet access might either be there, be spotty or just not be a possibility at all -- see (3).
So far the only ideas I have seen are to save the XML to an iframe and save that document -- but it seems that you can only do this in IE? Also, that I could construct a data URI and place that in a link. My fear here is that it will just open the XML file in the window, rather than prompt the user to save it.
I know that if I require the JavaScript to be local, I can raise privileges and just directly save the file (or hopefully cause a save dialog box to appear). However, I'd much prefer a solution where I do not require raised privileges (even a Firefox 3.6 only solution).
I apologize if this offends anyone's sensibilities (for example, not supporting every browser). I basically want to write an offline application and Javascript/HTML/CSS seem to be the best candidate considering the complexity of the requirements and the time available. However, I have this single requirement of being able to save data that must be overcome before I can choose this line of development.
How about the Downloadify script?
It's based on Flash and jQuery, and it can prompt a dialog box to save the file on your computer.
Downloadify.create('downloadify', {
    filename: function () {
        return document.getElementById('filename').value;
    },
    data: function () {
        return document.getElementById('data').value;
    },
    onComplete: function () {
        alert('Your File Has Been Saved!');
    },
    onCancel: function () {
        alert('You have cancelled the saving of this file.');
    },
    onError: function () {
        alert('You must put something in the File Contents or there will be nothing to save!');
    },
    swf: 'media/downloadify.swf',
    downloadImage: 'images/download.png',
    width: 100,
    height: 30,
    transparent: true,
    append: false
});
Using a base64-encoded data URI, this is possible with only HTML and JS. What you can do is encode the data that you want to save (in your case, a string of XML data) into base64, using a JS library like jquery-base64 by carlo. Then put the encoded string into a link and add the link to the DOM.
Example using the library I mentioned (as well as jQuery):
<html>
<head>
    <title>Example</title>
</head>
<body>
    <script>
        // include jquery and jquery-base64 here (or whatever library you want to use)
        document.write('click to make save dialog');
    </script>
</body>
</html>
...and remember to make the content-type something like application/octet-stream so the browser doesn't try to open it.
Warning: some older IE versions don't support base64, but you said that didn't matter, so this should work fine for you.
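If you only need reasonably current Firefox or Chrome, here is a library-free sketch of the same idea: build the XML string, wrap it in a Blob served as application/octet-stream, and click a link with the download attribute so the browser offers a save dialog instead of rendering the XML. The file name is a placeholder.

function saveXml(xmlString, filename) {
    var blob = new Blob([xmlString], { type: 'application/octet-stream' });
    var link = document.createElement('a');
    link.href = URL.createObjectURL(blob);
    link.download = filename; // e.g. 'export.xml' (placeholder name)
    document.body.appendChild(link);
    link.click();
    URL.revokeObjectURL(link.href);
    link.remove();
}

saveXml('<root><item/></root>', 'export.xml');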
Without any more insight into your specific requirements, I would not recommend a pure Javascript/HTML solution. From a user perspective you would probably get the best results writing a native application. However if it will be faster to use Javascript/HTML, I recommend using a local application hosting a lightweight web server to serve up your content. That way you can cleanly handle the file saving server-side while focusing the bulk of your effort on the front-end application.
You can code up a web server in - for example - Python or Ruby using very few lines of code and without 3rd party libraries. For example, see:
Making a simple web server in python
WEBrick - Writing a custom servlet
python-trick-really-little-http-server - This one is really simple, and will easily let you serve up all of your HTML/CSS/JS files:
"""
Serves files out of its current directory.
Doesn't handle POST requests.
"""
import SocketServer
import SimpleHTTPServer
PORT = 8080
def move():
""" sample function to be called via a URL"""
return 'hi'
class CustomHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
def do_GET(self):
#Sample values in self for URL: http://localhost:8080/jsxmlrpc-0.3/
#self.path '/jsxmlrpc-0.3/'
#self.raw_requestline 'GET /jsxmlrpc-0.3/ HTTP/1.1rn'
#self.client_address ('127.0.0.1', 3727)
if self.path=='/move':
#This URL will trigger our sample function and send what it returns back to the browser
self.send_response(200)
self.send_header('Content-type','text/html')
self.end_headers()
self.wfile.write(move()) #call sample function here
return
else:
#serve files, and directory listings by following self.path from
#current working directory
SimpleHTTPServer.SimpleHTTPRequestHandler.do_GET(self)
httpd = SocketServer.ThreadingTCPServer(('localhost', PORT),CustomHandler)
print "serving at port", PORT
httpd.serve_forever()
Finally - Depending on who will be using your application, you also have the option of compiling a Python program into a Frozen Binary so the end user does not have to have Python installed on their machine.
JavaScript is not allowed to write to the local machine. Your question is similar to this one.
I suggest creating a simple desktop app.
Is a localhost PHP server OK? The web traditionally can't save to the hard drive because of security concerns. PHP can push files, though it requires a server.
Print-to-PDF plugins are available for all browsers. Install once, print to PDF forever. Then you can use JavaScript or Flash to call a Print function.
Also, if you are developing for an environment where internet access is spotty, consider using VB.NET or some other desktop language.
EDIT:
You can use the browser's Print function.
Are you looking for something like this?
If PHP is OK, it would be much easier.
With IE you could use document.execCommand, but I note that IE is not an option.
Here's something that looks like it might help, although it will not prompt with SaveAs dialog, https://developer.mozilla.org/en/Code_snippets/File_I%2F%2FOL.
One simple but odd way to do this that doesn't require any Flash is to create an <a/> with a data URI for its href. This even has pretty good cross-browser support, although for IE it must be at least version 8 and the URI must be < 32k. It looks like someone else on SO has more to say on the topic.
Why not use a hybrid approach: Flash on the client with a server-side solution as a fallback? Most people have Flash, so you can default to the client side to conserve resources on the server.
