I'm building a game using HTML5 Canvas and JavaScript, and I'm using JSON-formatted tile maps for my levels. The tiles render correctly in Firefox, but when I use Chrome, the JSON fetching fails with an "Origin null is not allowed by Access-Control-Allow-Origin" error. I was using jQuery's $.ajax method, and all my files are in one directory.
I would use this post's solution, but I can't use the web server solution.
Is there any other way to fetch JSON files to be parsed and read from? Something akin to loading an image just by giving its URL? Or is there some way to quickly convert my JSON files into globally available strings so I can parse them with JSON.parse()?
Why is the local web server not an option? Apache is free, can be installed on almost anything, and is easy to use, IMO. Also, for Chrome specifically, look into --allow-file-access-from-files.
But if nothing else works, maybe you could just add links to the files in script tags, and then prepend var SomeGlobalObject = ... to the top of each file. You might even be able to do this dynamically by using JS to append the script tag to the head. But in the end, instead of using AJAX, you can just call JSON.parse(SomeGlobalObject).
In other words, load the files into the global namespace by adding script tags. Normally this would be considered bad practice, but used ONLY for testing, in the absence of any other options, it may work.
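For example, a minimal sketch of that approach (the file and variable names here are made up): suppose each level file, say level1.json, is saved as level1.js with the JSON wrapped in a string assignment:

// level1.js -- the original JSON wrapped as a global string
var SomeGlobalObject = '{"width": 10, "height": 10, "tiles": [0, 1, 2, 1]}';

The page then loads it with a plain script tag, which is not subject to the origin check, and parses the global:

<script src="level1.js"></script>
<script>
var map = JSON.parse(SomeGlobalObject); // no AJAX involved
</script>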
One option which may work for you in Chrome is to invoke the browser with the command-line switch --allow-file-access-from-files. This question addresses the issue: Google Chrome --allow-file-access-from-files disabled for Chrome Beta 8
Another possibility is to fetch the JSON data as a script, setting a global variable to the JSON value.
Please read carefully before marking as dupe.
I want to read a JavaScript file on the frontend. The JavaScript file is obviously being used as a script on the webpage. I want to read that JavaScript file as text and verify whether the correct version of it is being loaded in the browser. From different chunks of text in the JS file, I can identify which version is actually being used in the end user's browser. The JS file is main.js, which is generated by the Angular build.
I know we could do something like creating a global variable for the version, or some mature version management. But currently, on the production site, that would mean a new release, which is a couple of months from now. The only option I have right now is an html/js page, which can be served directly from the production site, without waiting for a new release.
So my question is: is it possible to read a JavaScript file as text in html/js code in the browser?
An idea:
use the Fetch API to request the script file asynchronously
https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API
use the text() method, which returns a promise resolving to the text content
fetch('http://localhost:8100/scripts.js')
  .then((res) => res.text())
  .then((scriptContent) => {
    // scriptContent contains the text of your script, e.g.
    // if (scriptContent.includes('version : 1.1.1')) { ... }
    console.log(scriptContent);
  });
This is absolutely not an efficient way, but since you want to work without a new version release or a version management system, here is the thing:
Assume the file's checksum (MD5, SHA-256, or anything) for v1.0 equals X, and you calculate it before the coding part.
if (checksum(file) != X) {
  location.reload()
}
This would also help as a security feature, since it's an important project.
Another way of handling this situation is to change the main.js file's name, if that is possible.
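As a concrete (if inefficient) sketch of the checksum idea, using the browser's built-in Web Crypto API and assuming the script is served from the same origin over HTTPS (the URL and the expected digest below are placeholders):

// Digest of the known-good main.js, computed ahead of time (placeholder value).
var EXPECTED = 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855';

async function verifyScript(url) {
  const text = await (await fetch(url)).text();
  const digest = await crypto.subtle.digest('SHA-256', new TextEncoder().encode(text));
  const hex = Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
  if (hex !== EXPECTED) {
    location.reload(); // the browser is running a stale copy; force a refresh
  }
}

verifyScript('/main.js');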
When I inspect a webpage, I can see JavaScript files with a query string attached to their names. Like: jquery-1.9.1.min.js?ctag=0$$16.0.4230.1217.
I googled ctag but didn't find anything useful that I could understand.
I want to know: what is the difference between files with a ctag and without it?
(jquery-1.9.1.min.js vs. jquery-1.9.1.min.js?ctag=0$$16.0.4230.1217)
When you see HTTP parameters added to the end of a script URL, it's usually for one of two reasons.
To control how the browser caches the script.
Usually a version number is added; the cache can then be forced to update by changing the version (a short example follows below), i.e.
http://example.com/js/myscript.js?ver=0.4
To send data to the server.
It might be that the script being returned is actually generated server side and the parameter is sending a value that is used in the generation of that script. i.e.
http://example.com/js/myscript.js?userid=935284025805
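To illustrate the first case: the version is usually just baked into the script tag, and bumping it on each release forces a fresh download:

<!-- change ver from 0.4 to 0.5 on the next release and browsers
     will fetch a fresh copy instead of reusing the cached one -->
<script src="http://example.com/js/myscript.js?ver=0.4"></script>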
UPDATE: searching the web, it seems that scripts deployed on SharePoint using JSLINK get a ctag parameter added to their URLs. It's possible the link is from a SharePoint site; see here and here for SharePoint questions about the added ctag.
I'm thinking of doing some online file manipulation for mobile users. The idea is that the user provides a URL to the file, the file contents are modified by the JS, and the result can then be downloaded. But I haven't been able to figure out how to get the file when it's on a separate domain using just JS.
Is this possible? If so, any hints or examples would be appreciated.
Just wanted to add that part of what I want to do is make it available without my hosting it. I'm thinking of something like a file they can host somewhere, so all of the bandwidth is their own... and that of wherever they are getting the file from, of course.
The only way to load the contents of a file on another domain is from within a <script> tag. This is how JSONP works. Look into getting your target file into this format.
The other way would be to use a local proxy. Create a web service method that loads and returns the contents of the file, then call that locally using your favorite JavaScript framework.
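A minimal JSONP sketch, assuming the remote service can wrap its response in a named callback (the URL and parameter name here are hypothetical):

// The remote endpoint is expected to respond with: handleData({...});
function handleData(data) {
  console.log(data);
}

var script = document.createElement('script');
script.src = 'http://example.com/file?callback=handleData';
document.head.appendChild(script);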
Depending on how you think of public web services, and within some limitations I'm still mapping, you can do this using an AJAX call to YQL, like so.
(will expand the answer later).
http://query.yahooapis.com/v1/public/yql?q=select%20%2a%20from%20data.uri%20where%20url=%22http://t3.gstatic.com/images?q=tbn:ANd9GcSyART8OudfFJQ5oBplmhZ6HIIlougzPgwQ9qcgknK8_tivdW0EOg%22
One of the limitations of this method is file size, it currently tops out at 25k.
I'm making an application that draws a lot of widgets and tiles on a canvas. The core UI will be defined by a long string of characters and drawn at page load by JavaScript. Since that core UI is big (>250K) and never changes, what's a good way to cache it?
I know I COULD just stick it in a variable at the top of the file, but is there a more efficient way? Like, if I wrote:
var img = new Image();
img.src = 'moose.png';
I assume that the browser will download and cache this image, so that on subsequent requests it won't have to hit my server again. But how do I do that with a chunk of text?
EDIT: basically I'm looking for an alternative to this:
var myUI = "AAAABBBCBVBVNCNDGAGAGABVBVB.... etc. for about 20 pages";
You can create a JavaScript file that contains the string of text.
var text='.....';
Then you can just include the script:
<script src="/ui.initialization.js" type="text/javascript"></script>
Followed by whatever other javascript you use to render the UI. The browser will cache the js file.
EDIT
I'm assuming you're opposed to having the long string of text inline. By moving it to a separate file, you allow the browser to cache it (if it's truly static and your server is configured with proper cache-control for static resources).
Here's some information for tweaking caching on Apache (I'm assuming you're running PHP on Apache).
Most static resources are cacheable by a browser. Just put your data in a .txt, .dat, .xml, or whatever (even a .js) and load it with your JavaScript via AJAX.
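A rough sketch of that (the file name is made up; any static text file works):

// Load a static text resource via AJAX; on repeat visits the browser can
// serve it straight from its cache, subject to the server's cache headers.
var xhr = new XMLHttpRequest();
xhr.onload = function () {
  var myUI = xhr.responseText;
  // ... parse myUI and draw the widgets ...
};
xhr.open('GET', 'ui-data.txt');
xhr.send();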
The time to download 250K over anything above 1 Mbps throughput is under a second... is this really an issue for you?
And the very file you are downloading that contains that JavaScript with its 250K baggage will probably be cached itself.
You can use Google Gears or the new HTML5 data storage features, supported by FF 3.5 and others.
Use Google Page Speed or YSlow to figure out what other (HTTP) improvements you can make.
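For the HTML5 storage route, a hedged sketch using localStorage (the key and file names are arbitrary):

// Check local storage first; fall back to a one-time download.
var myUI = localStorage.getItem('core-ui');
if (myUI === null) {
  var xhr = new XMLHttpRequest();
  xhr.onload = function () {
    localStorage.setItem('core-ui', xhr.responseText);
    // ... render from xhr.responseText ...
  };
  xhr.open('GET', 'ui-data.txt');
  xhr.send();
} else {
  // ... render from myUI, with no network request at all ...
}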
I use curl in PHP, and httplib2 in Python, to fetch URLs.
However, there are some pages that use JavaScript (AJAX) to retrieve the data after the page has loaded, and they just overwrite a specific section of the page afterward.
So, is there any command-line utility that can handle JavaScript?
To see what I mean, go to monster.com and try searching for a job.
You'll see that the AJAX is getting the list of jobs afterward. So, if I wanted to pull in the jobs based on my keyword search, I would get the page with no jobs.
But via a browser it works.
You can use PhantomJS:
http://phantomjs.org
You can use it as below:
var page=require("webpage");
page.open("http://monster.com",function(status){
page.evaluate(function(){
/* your javascript code here
$.ajax("....",function(result){
phantom.exit(0);
}); */
});
});
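You'd save that as, say, scrape.js (a made-up name) and run it from the command line with the phantomjs binary: phantomjs scrape.js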
Get Firebug and see the URL for that AJAX request. You may then use curl with that URL.
There are two ways to handle this. Write your screen scraper using a full browser-based client like WebKit, or go to the actual page, find out what the AJAX request is doing, and make that request directly. You then need to parse the results, of course. Use Firebug to help you out.
Check out this post for more info on the subject. The upvoted answer suggests using a test tool to drive a real browser.
What's a good tool to screen-scrape with Javascript support?
I think env.js can handle <script> elements. It runs in the Rhino JavaScript interpreter and has its own XMLHttpRequest object, so you should be able to at least run the scripts manually (select all the <script> tags, get the .js files, and call eval) if it doesn't run them automatically. Be careful about running scripts you don't trust, though, since they can use any Java classes.
I haven't played with it since John Resig's first version, so I don't know much about how to use it, but there's a discussion group on Google Groups.
Maybe you could try to use the features of HtmlUnit in your own utility?
HtmlUnit is a "GUI-Less browser for
Java programs". It models HTML
documents and provides an API that
allows you to invoke pages, fill out
forms, click links, etc... just like
you do in your "normal" browser.
It has fairly good JavaScript support
(which is constantly improving) and is
able to work even with quite complex
AJAX libraries, simulating either
Firefox or Internet Explorer depending
on the configuration you want to use.
It is typically used for testing
purposes or to retrieve information
from web sites.
Use LiveHttpHeaders, a plug-in for Firefox, to see all the URL details, and then use cURL with that URL.
LiveHttpHeaders shows all the information, like the type of method (POST or GET), the headers, the body, etc.
It also shows the POST or GET parameters in the headers.
I think this may help you.