Are we allowed to link files directly from GitHub?
<link rel="stylesheet" href="https://raw.github.com/username/project/master/style.css"/>
<script src="https://raw.github.com/username/project/master/script.js"></script>
I know this is allowed on Google Code. This way I don't have to worry about updating a local file.
The great service RawGit was already mentioned, but I'll throw another into the ring: GitCDN.link
Benefits:
Lets you link to specific commits, as well as auto-get the latest (aka master)
Incurs no damage from high traffic volumes; RawGit asks that its dev.rawgit.com links be used only during development, whereas GitCDN gives you access to the latest version without the danger of the servers exploding
Gives you the option of auto-minifying your HTML, CSS and JavaScript, or serving it as written (https://min.gitcdn.link)
Adds compression (GZip)
Adds all the correct headers (Content-Type, Cache-Control, ETag, etc.)
Full disclosure: I'm a project maintainer at GitCDN.link
You can use the external service rawgithub.com. Just remove the dot between the words 'raw' and 'github' — https://raw.github.com/.. => https://rawgithub.com/ — and use that URL. You can find more info in this question.
However, according to the rawgithub.com website, it will be shutting down at the end of October 2019.
You can link directly to raw files, but it's best not to, since the raw files always get sent with a text/plain header and can cause loading problems.
You need to carry out the following steps:
Get the raw URL of the file from GitHub. It is something like https://raw.githubusercontent.com/username/folder/example.css
Visit http://rawgit.com/. Paste the raw URL above into the input box. It will generate two URLs, one for development and one for production.
Copy whichever one you need and you are done.
The file will then be served as if from a CDN. You can also use Gist URLs.
GitHub Pages: https://yourusername.github.io/script.js
GitHub repo raw files: https://github.com/yourusername/yourusername.github.io/blob/master/script.js
Use GitHub Pages, DO NOT use raw files.
Reason:
GitHub Pages is backed by a CDN; raw files are not. Accessing raw files hits the GitHub servers directly and increases server load.
Add a branch to your project named "gh-pages" and then (shortly after branching) you'll be able to use a direct URL such as https://username.github.io/project/master/style.css (using your URL, and assuming "style.css" is a file in the "master" folder at the root of your "project" repository, and that your GitHub account is "username").
For those who ended up in this post and just want to get the raw link from an image in GitHub:
If it is an image, you can just add '?raw=true' at the end of the link to the file.
E.g.
Original link:
https://github.com/githubusername/repo_name/blob/master/20160309_212617-1.png
Raw link:
https://github.com/githubusername/repo_name/blob/master/20160309_212617-1.png?raw=true
Use jsdelivr.com
Copied directly from https://www.jsdelivr.com/?docs=gh:
load any GitHub release, commit, or branch
note: we recommend using npm for projects that support it
https://cdn.jsdelivr.net/gh/user/repo@version/file
load jQuery v3.2.1
https://cdn.jsdelivr.net/gh/jquery/jquery@3.2.1/dist/jquery.min.js
use a version range instead of a specific version
https://cdn.jsdelivr.net/gh/jquery/jquery@3.2/dist/jquery.min.js
https://cdn.jsdelivr.net/gh/jquery/jquery@3/dist/jquery.min.js
omit the version completely to get the latest one
you should NOT use this in production
https://cdn.jsdelivr.net/gh/jquery/jquery/dist/jquery.min.js
add ".min" to any JS/CSS file to get a minified version
if one doesn't exist, we'll generate it for you
https://cdn.jsdelivr.net/gh/jquery/jquery@3.2.1/src/core.min.js
add / at the end to get a directory listing
https://cdn.jsdelivr.net/gh/jquery/jquery/
After searching for this same functionality, I ended up writing my own PHP script to act as a proxy. The trouble I kept running into is that even when you get the RAW version/link from GitHub and link to it in your own page, the header sent over is 'text/plain', so Chrome was not executing my JavaScript file from GitHub. I also didn't like the other links posted for using third-party services, because of the obvious security/tampering issues possible.
So using this script, I can pass over the RAW link from Github, have the script set the correct headers, and then output the file as if it were coming from my own server. This script can also be used with a secure application to pull in non-secure scripts without throwing SSL errors warning of "Non-secure links used".
Linking:
<script src="proxy.php?link=https://raw.githubusercontent.com/UserName/repo/master/my_script.js"></script>
proxy.php
<?php
###################################################################################################################
#
# This script can take two URL variables
#
#   "type"
#       OPTIONAL
#       STRING
#       Sets the type of file that is output
#
#   "link"
#       REQUIRED
#       STRING
#       The link to grab and output through this proxy script
#
###################################################################################################################

# Make sure a link was actually passed in before doing anything else
if(!isset($_GET['link']) || $_GET['link'] == ''){
    header('HTTP/1.1 400 Bad Request');
    exit('No link specified');
}

# First we need to set the headers for the output file
# So check to see if the type is specified first and if so, then set according to what is being requested
if(isset($_GET['type']) && $_GET['type'] != ''){
    switch($_GET['type']){
        case 'css':
            header('Content-Type: text/css');
            break;
        case 'js':
            header('Content-Type: text/javascript');
            break;
        case 'json':
            header('Content-Type: application/json');
            break;
        case 'rss':
            header('Content-Type: application/rss+xml; charset=ISO-8859-1');
            break;
        case 'xml':
            header('Content-Type: text/xml');
            break;
        default:
            header('Content-Type: text/plain');
            break;
    }
# Otherwise, try and determine what file type should be output by the file extension from the link
}else{
    # See if we can find a file type in the link specified and set the headers accordingly
    # If a css file extension is found, then set the headers to css format
    if(strstr($_GET['link'], '.css') !== FALSE){
        header('Content-Type: text/css');
    # If a javascript file extension is found, then set the headers to javascript format
    }elseif(strstr($_GET['link'], '.js') !== FALSE){
        header('Content-Type: text/javascript');
    # If a json file extension is found, then set the headers to json format
    }elseif(strstr($_GET['link'], '.json') !== FALSE){
        header('Content-Type: application/json');
    # If an rss file extension is found, then set the headers to rss format
    }elseif(strstr($_GET['link'], '.rss') !== FALSE){
        header('Content-Type: application/rss+xml; charset=ISO-8859-1');
    # If an xml file extension is found, then set the headers to xml format
    }elseif(strstr($_GET['link'], '.xml') !== FALSE){
        header('Content-Type: text/xml');
    # If we still haven't found a suitable file extension, then just set the headers to plain text format
    }else{
        header('Content-Type: text/plain');
    }
}

# Now get the contents of the page we're wanting
$contents = file_get_contents($_GET['link']);

# And finally, spit everything out
echo $contents;
?>
If your webserver has allow_url_include enabled, GitHub serving the files as raw text/plain is not a problem, since you can include the file from a PHP script of your own and set the proper MIME type header yourself.
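For example, a minimal sketch of that idea, using readfile() (which only needs allow_url_fopen) and a placeholder GitHub URL:
<?php
// Minimal sketch: fetch a raw GitHub file and serve it with the correct MIME type.
// Placeholder URL; requires allow_url_fopen to be enabled.
header('Content-Type: text/javascript');
readfile('https://raw.githubusercontent.com/username/project/master/script.js');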
Related
So I want to write a REST API in PHP serving JSON, for consumption on the iPhone but also on a lot of websites and devices. I have the code below, which, when accessed via a GET request, returns a file named something like:
1mdi2o3.part
How do I return something like: users.json
$db->setQuery( "SELECT * FROM users");
$db->query() or die($queryError);
$numRows = $db->getNumRows();
$row = $db->loadObjectList();
// PRINT JSON FILE
header("Content-type: application/json");
for($i = 0 ; $i < $numRows; $i++){
    $json_output[$i] = $row[$i];
}
$MYjson_output = json_encode($json_output);
echo $MYjson_output;
Not entirely sure what your goal is, but here are 3-4 solutions that might work:
Common Solutions
Rewrite
This is probably the most conventional way of getting clean URIs for your API. If you want the user part of user.json to be dynamic, you can use mod_rewrite (again, assuming Apache) to rewrite your URLs to a PHP handler script. If you are going the conventional RESTful-style URL route, you will probably want to use this anyway to achieve clean/separated URLs.
# For grabbing all users
RewriteEngine on
RewriteRule ^users\.json rest.php [L]
# For grabbing one particular user
RewriteEngine on
RewriteRule ^([a-z0-9]+)\.json rest.php?user=$1 [L]
Where rest.php is your PHP handler script.
URL Rewriting without mod_rewrite
If you don't want to use mod_rewrite, you can also do something like
example.com/rest.php/users.json
example.com/rest.php/user-id.json
or even
example.com/rest.php/user-id
example.com/rest.php/user/user-id
Where rest.php is your PHP handler script. You can grab the user-id from the URL (or URI if we're talking RESTful terms) using the $_SERVER global with $_SERVER['REQUEST_URI'].
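A hedged sketch of how rest.php might pull the user-id out of the URI (the names here are just illustrative):
<?php
// rest.php - illustrative sketch: extract the user-id from a URI like /rest.php/some-user.json
$path   = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);  // strip any query string
$userId = basename($path, '.json');                          // "/rest.php/some-user.json" -> "some-user"

header('Content-Type: application/json');
echo json_encode(array('user' => $userId));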
Other solutions
Changing download name via Content-Disposition:
I believe you want to add the Content-Disposition header...
<?php
header('Content-Disposition: attachment; filename="users.json"');
?>
This will force the file to be downloaded as users.json. This usually isn't the behavior expected for a REST API, but from the way your question was worded, I figured I'd throw it out there.
AddHandler (or AddType)
Assuming you're running an Apache server, you can also just use the AddHandler directive to make files with the .json extension be treated as if they were PHP.
AddHandler application/x-httpd-php .json
Warning: Other plain .json files will be treated as PHP, so you'd probably want to set this in a .htaccess file in the directory with the PHP script. This works fine for this one URI, but it's not ideal if you were exposing many URIs.
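As a hedged illustration of how that plays out, users.json itself would then just be PHP that emits JSON (sketch only; the query code from the question goes where indicated):
<?php
// users.json - executed as PHP because of the AddHandler directive above
header('Content-Type: application/json');

// (run the database query from the question here and fill $json_output from its rows)
$json_output = array();
echo json_encode($json_output);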
Welcome to SO. Have you considered using the Zend framework for this application? I've written about this topic before, and if you do use Zend I could probably be of additional help. It would certainly help get you past the basics.
HTH,
-aj
The problem is this line:
header("Content-type: application/json");
I know this looks like the right way to do it (after all, application/json is the official MIME type for JSON content), but it causes most browsers to present you with a file download dialog when you just want to see the text. You can use the text/plain content type to avoid this. Note that your AJAX/iPhone/... app probably doesn't care about the content type, so your API will work in both cases.
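In other words, a minimal sketch of the swap, reusing $json_output from the question's code:
<?php
// Serve the JSON as plain text so browsers display it instead of offering a download
header("Content-Type: text/plain");
echo json_encode($json_output);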
Also see this blog post which provides some more context.
I've made a simple website that works fine on localhost. I've put it on an IIS Windows 2008 R2 server, and now my PHP scripts don't write to my JSON files anymore. I've checked the server and PHP is installed on it, so I don't really know what's wrong or where to look.
I'm still not getting it to work so thought I'd explain the situation in more detail.
So this script works on localhost but not on IIS server.
<?php
$myFile = "../json/countries.json";
$fh = fopen($myFile, 'w') or die("can't open file");
$stringData = json_encode($_POST["data"]);
fwrite($fh, $stringData);
fclose($fh)
?>
I've tried adding:
error_reporting(E_ALL);
ini_set('display_errors', '1');
and
chmod("../json/countries.json", 0644);
to the php but not seeing any different results or any errors.
Here's the JavaScript function that starts the process; outputting the object to the console does show the correct data to be saved.
function saveJson(object, file) {
    console.log("Saving JSON data: " + JSON.stringify(object));
    $.ajax({
        type: "POST",
        dataType: 'json',
        async: false,
        url: file,
        data: { data: object },
        success: function () { console.log("Thanks!"); },
        failure: function () { console.log("Error!"); }
    });
}
Still the json files are not being changed.
Anyone good with Windows Server and php that might know why?
Thanks
This kind of problem occurs for one of three reasons only:
You don't have the proper directory owner. E.g. on Apache the default user is www-data.
First check the directory owner.
You don't have sufficient write permission on the directory (e.g. 755).
Check the directory permissions.
The path to the directory you are writing or uploading the file to is incorrect.
Check the directory path where you are writing the file.
According to the PHP documentation, for installations on Windows systems the default user is IUSR.
Hence you have to set the directory owner to 'IUSR' (which is part of the IIS_IUSRS group). If that doesn't work, then try setting it to 'IIS AppPool\{YourApplicationPoolName}' instead of IIS AppPool\DefaultAppPool, e.g. IIS AppPool\yourdomain.com. This setting is required for IIS 8.
First try changing the owner to IUSR; according to the PHP documentation that should work.
Then set the write permission for the directory.
How to change a directory's permissions and user group on a Windows system:
Right-click on the directory -> Security -> Edit.
Please check the attached screenshot. A quick way to verify all three points from PHP itself is sketched below.
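A hedged diagnostic sketch for the three checks above (the ../json path is taken from the question; adjust to your layout):
<?php
// Check path, ownership/permissions, and writability before attempting the write
$dir  = realpath('../json');
$file = '../json/countries.json';

if ($dir === false) {
    die("Directory not found - check the path relative to this script");
}
if (!is_writable($dir)) {
    die("Directory exists but is not writable - check its owner (IUSR / IIS_IUSRS) and permissions");
}
if (file_exists($file) && !is_writable($file)) {
    die("File exists but is not writable - check its permissions too");
}
echo "Path and permissions look OK";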
What version of PHP are you using? If it is less than 5.2, the json_encode function may not be available. Use phpinfo() to find out.
It may also be that $_POST['data'] does not exist. Try encoding a literal array instead to see if that's the problem, e.g. array(1,2,3).
Also, if no warnings are being shown, use error_reporting(E_ALL); or error_reporting(-1); at the top of your script. It may be that your host is turning off these errors; this will allow you to see what the problem is.
The host also has the ability to deactivate certain PHP functions. Contact their customer service regarding this problem, although I find it very unlikely that they have deactivated the ones you are using.
Make sure the file exists relative to where the PHP code is being executed. If this is the problem, the die command should run. You can check whether the file exists using:
if (file_exists ( $filename ) ) echo "file exists";
else echo "file does not exist";
If none of the above work then I haven't got a clue! Sorry.
Add IIS_IUSRS with "execute, list, read" access on the Security tab of the directory's Properties.
On my site I have my resources folder outside of the root, for example:
/var/www/html/ is the root directory
/var/www/resources/
I currently have a config file that sets the location of the library so I can include it with php like so:
defined("LIBRARY_PATH")
or
define("LIBRARY_PATH", realpath(dirname(__FILE__) . '/library'));
which works perfectly when I use:
<?php include_once(LIBRARY_PATH . "/file.php"); ?>
but it doesn't work when trying to add Javascript files:
e.g.
<script src="../resources/library/js/test.js"></script>
links to 'www.website.com/resources/library/js/common.js'
or
<script src="<?php echo LIBRARY_PATH; ?>/js/test.js"></script>
links to 'www.website.com/var/www/resources/library/js/test.js'
neither of which work.
Any suggestions on how I can do this without having the js files in or above the root?
Your JavaScript files have to be accessible to the browser because they are executed by the browser and not by the server.
This requires that they have a URL.
Putting the files under the webroot is the standard way to give a static file a URL.
Alternatively, you could write a program (e.g. in PHP) that will read the file and then output its content to the browser. This is more complicated, makes dealing with cache-control headers more fiddly, and is not recommended.
Assuming you understand what you're doing and the security implications of it:
You create a linkjs.php script that takes the relative path to the script (from some root dir, perhaps /var/www/resources/library/js) as a parameter, like:
<script src="/linkjs.php?p=test.js">
In your PHP script you resolve the full file path, check that it's indeed a file under the root dir (to protect against ../ in the parameter) and that it's readable by your PHP user, then read the content and output it into the response. Don't forget to set the content type to text/javascript, of course.
Ideally, you should also provide proper caching headers based on the source file modification time, but that is a topic in itself. See the guidelines in other SO questions about proper caching headers for dynamic content.
The upside is that you can do on-the-fly script minification/combining/wrapping/substitutions if you like/need.
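A minimal sketch of such a script, assuming the example root dir from above and leaving out the caching headers already mentioned:
<?php
// linkjs.php - minimal sketch, not hardened for production
$root = '/var/www/resources/library/js';   // root dir the scripts live under (outside the webroot)
$name = isset($_GET['p']) ? $_GET['p'] : '';

$path = realpath($root . '/' . $name);

// Reject anything that resolves outside $root (protects against ../ in the parameter)
if ($path === false || strpos($path, $root . '/') !== 0 || !is_readable($path)) {
    header('HTTP/1.1 404 Not Found');
    exit;
}

header('Content-Type: text/javascript');
readfile($path);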
When I try to change the linked reference of a local JavaScript file to a GitHub raw version my test file stops working. The error is:
Refused to execute script from ... because its MIME type (text/plain) is not executable, and strict MIME type checking is enabled.
Is there a way to disable this behavior or is there a service that allows linking to GitHub raw files?
Working code:
<script src="bootstrap-wysiwyg.js"></script>
Non-working code:
<script src="https://raw.github.com/mindmup/bootstrap-wysiwyg/master/bootstrap-wysiwyg.js"></script>
There is a good workaround for this, now, by using jsdelivr.net.
Steps:
Find your link on GitHub, and click to the "Raw" version.
Copy the URL.
Change raw.githubusercontent.com to cdn.jsdelivr.net
Insert /gh/ before your username.
Remove the branch name.
(Optional) Insert the version you want to link to, as @version (if you do not do this, you will get the latest, which may cause long-term caching)
Examples:
http://raw.githubusercontent.com/<username>/<repo>/<branch>/path/to/file.js
Use this URL to get the latest version:
http://cdn.jsdelivr.net/gh/<username>/<repo>/path/to/file.js
Use this URL to get a specific version or commit hash:
http://cdn.jsdelivr.net/gh/<username>/<repo>@<version or hash>/path/to/file.js
For production environments, consider targeting a specific tag or commit hash rather than the branch. Using the latest link may result in long-term caching of the file, causing your link to not be updated as you push new versions. Linking to a file by commit hash or tag makes the link unique to that version.
Why is this needed?
In 2013, GitHub started using X-Content-Type-Options: nosniff, which instructs more modern browsers to enforce strict MIME type checking. The raw files are then served with the plain-text MIME type the server returns, preventing the browser from using them as intended (if the browser honors the setting).
For background on this topic, please refer to this discussion thread.
This is no longer possible. GitHub has explicitly disabled JavaScript hotlinking, and newer versions of browsers respect that setting.
Heads up: nosniff header support coming to Chrome and Firefox
rawgithub.com now redirects to rawgit.com, so the above example would now be:
http://rawgit.com/user/package/master/link.min.js
GitHub Pages is GitHub’s official solution to this problem.
raw.githubusercontent makes all files use the text/plain MIME type, even if the file is a CSS or JavaScript file. So going to https://raw.githubusercontent.com/‹user›/‹repo›/‹branch›/‹filepath› will not be the correct MIME type but instead a plaintext file, and linking it via <link href="..."/> or <script src="..."></script> won’t work—the CSS won’t apply / the JS won’t run.
GitHub Pages hosts your repo at a special URL, so all you have to do is check-in your files and push. Note that in most cases, GitHub Pages requires you to commit to a special branch, gh-pages.
On your new site, which is usually https://‹user›.github.io/‹repo›, every file committed to the gh-pages branch (the most recent commit) is present at this url. So then you can link to your js file via <script src="https://‹user›.github.io/‹repo›/file.js"></script>, and this will be the correct MIME type.
Do you have build files?
Personally, my recommendation is to run this branch parallel to master. On the gh-pages branch, you can edit your .gitignore file to check in all the dist/build files you need for your site (e.g. if you have any minified/compiled files), while keeping them ignored on your master branch. This is useful because you typically don’t want to track changes in build files in your regular repo. Every time you want to update your hosted files, simply merge master into gh-pages, rebuild, commit, and then push.
(protip: you can merge and rebuild in the same commit with these steps:)
$ git checkout gh-pages
$ git merge --no-ff --no-commit master # prepare the merge but don’t commit it (as if there were a merge conflict)
$ npm run build # (or whatever your build process is)
$ git add . # stage the newly built files
$ git merge --continue # commit the merge
$ git push origin gh-pages
https://raw.githack.com/
I found that this site supplies a CDN which will:
remove the nosniff HTTP header
fix the MIME type based on the file extension
and this site:
https://rawgit.com/
NOTE: RawGit has reached the end of its useful life
You can also use a browser extension to remove the X-Content-Type-Options response header for raw.githubusercontent.com files. There are a couple of browser extensions to modify response headers.
Requestly: Chrome & Firefox
Modify Header Value: Firefox
Remove X-Content-Type-Options response header using Requestly
Install Requestly for your browser
Open Rules Page
Click create rule & Select Modify Headers
In Source field, enter Url -> Contains -> raw.githubusercontent.com
In Response Headers section, Remove -> X-Content-Type-Options
How to test
I created a simple JS Fiddle to test whether we can use raw GitHub files as scripts in our code. Here is the Fiddle, with the following code:
<center id="msg"></center>
<script src="https://raw.githubusercontent.com/sachinjain024/practicebook/master/web-extensions-master/storage/background.js"></script>
<script>
  try {
    if (typeof BG.Methods !== 'undefined') {
      document.getElementById('msg').innerHTML = 'Script evaluated successfully!';
    }
  } catch (e) {
    document.getElementById('msg').innerHTML = 'Problem evaluating script';
  }
</script>
If you see Script evaluated successfully!, it means you are able to use the raw GitHub file in your code.
Otherwise, Problem evaluating script indicates that there is some problem while executing the script from the raw GitHub source.
Note: this will only work on your machine, so you won't be able to deploy it to production. This approach lets you quickly use the files in any GitHub repository without much hassle.
Disclaimer: I am the author of Requestly, so you can blame me for anything you don't like.
My use case was to load 'bookmarklets' directly from my Bitbucket account, which has the same restrictions as GitHub. The workaround I came up with was to fetch the script via AJAX and run eval on the response string; the snippet below is based on that approach.
<script>
  var sScriptURL = '<script-URL-here>';
  var oReq = new XMLHttpRequest();
  oReq.addEventListener("load", function fLoad() {
    eval(this.responseText + '\r\n//# sourceURL=' + sScriptURL);
  });
  oReq.open("GET", sScriptURL);
  oReq.send();
</script>
Note that the sourceURL comment is appended to allow debugging of the script within the browser's developer tools.
To make things clear and short:
//raw.githubusercontent.com --> //rawgit.com
Note that this is handled by RawGit's development hosting and not their CDN for production hosting.
When a file is uploaded to GitHub you can use it as an external source or as free hosting. Troy Alford has explained it well above, but to make it easier, here are a few simple steps so you can use a GitHub raw file in your site:
Here is your file's link:
https://raw.githubusercontent.com/mindmup/bootstrap-wysiwyg/master/bootstrap-wysiwyg.js
Now, to make it usable, you have to remove https:// and the dot ( . ) between 'raw' and 'githubusercontent',
Like this:
rawgithubusercontent.com/mindmup/bootstrap-wysiwyg/master/bootstrap-wysiwyg.js
Now when you visit this link you will get a link that can be used to call your JavaScript:
Here is the final link:
https://rawgit.com/mindmup/bootstrap-wysiwyg/master/bootstrap-wysiwyg.js
Similarly, if you host a CSS file, you have to do it as mentioned above. It is the easiest way to get a simple link to call your external CSS or JavaScript file hosted on GitHub.
I hope this is helpful.
Reference URL: http://101helper.blogspot.com/2015/11/store-blogger-codes-on-github-boost-blogger-speed.html
I found the error was shown due to the comments at the beginning of the file. You can solve this issue by simply creating your own file without the comment and pushing it to Git; then it shows no error.
As proof, you can try these two files with the same easy-pagination code:
without comment
with comment
I had the same issue as you; what I did was change it to
<script type="application/javascript" src="bootstrap-wysiwyg.js"></script>
It works for me.
Example
original
https://raw.githubusercontent.com/antelove19/qrcodejs/master/qrcode.min.js
cdn.jsdelivr.net
https://cdn.jsdelivr.net/gh/antelove19/qrcodejs/qrcode.min.js
The simplest way:
<script type="text/plain" src="http://raw.githubusercontent.com/user/repo/branch/file.js"></script>
Served by GitHub, and very reliable.
With text/plain
Without text/plain
raw.github.com is not truly raw access to the file asset,
but a view rendered by Rails.
So accessing raw.github.com is much heavier than needed.
I don't know why raw.github.com is implemented as a Rails view.
Instead of fixing this route issue, GitHub added an X-Content-Type-Options: nosniff header.
Workaround:
Put the script on user.github.io/repo
Use a third-party CDN like rawgit.com.
Alternatively, if generating your markup server-side, you can just fetch and inject.
For example, in JSTL you could do this:
<script type="text/javascript">
<c:import url="https://raw.github.com/mindmup/bootstrap-wysiwyg/master/bootstrap-wysiwyg.js" />
</script>
They don't allow hotlinking for a reason, so it's probably bad form if you want to be a good citizen. I'd suggest you cache that JavaScript and only actually re-fetch it periodically as you see fit.
I'm trying to create a website that can be downloaded and run locally by launching its index file.
All the files are local, no resources are used online.
When I try to use the AJAXSLT plugin for jQuery to process an XML file with an XSL template (in sub directories), I receive the following errors:
XMLHttpRequest cannot load file:///C:/path/to/XSL%20Website/data/home.xml. Origin null is not allowed by Access-Control-Allow-Origin.
XMLHttpRequest cannot load file:///C:/path/to/XSL%20Website/assets/xsl/main.xsl. Origin null is not allowed by Access-Control-Allow-Origin.
The index file making the request is file:///C:/path/to/XSL%20Website/index.html while the JavaScript files used are stored in file:///C:/path/to/XSL%20Website/assets/js/.
How can I fix this issue?
For instances where running a local webserver is not an option, you can allow Chrome access to file:// files via a browser switch. After some digging, I found this discussion, which mentions a browser switch in the opening post. Run your Chrome instance with:
chrome.exe --allow-file-access-from-files
This may be acceptable for development environments, but little else. You certainly don't want this on all the time. This still appears to be an open issue (as of Jan 2011).
See also: Problems with jQuery getJSON using local files in Chrome
Essentially the only way to deal with this is to have a webserver running on localhost and to serve them from there.
It is insecure for a browser to allow an AJAX request to access any file on your computer; therefore most browsers seem to treat "file://" requests as having no origin for the purpose of the "Same Origin Policy".
Starting a webserver can be as trivial as cd-ing into the directory the files are in and running:
python -m http.server
[Edit: Thanks @alextercete for pointing out that the command has been updated in Python 3]
This solution will allow you to load a local script using jQuery.getScript(). This is a global setting but you can also set the crossDomain option on a per-request basis.
$.ajaxPrefilter( "json script", function( options ) {
options.crossDomain = true;
});
What about using the JavaScript FileReader API to open the local file, i.e.:
<input type="file" name="filename" id="filename">
<script>
  $("#filename").change(function (e) {
    if (e.target.files != undefined) {
      var reader = new FileReader();
      reader.onload = function (e) {
        // Get all the contents in the file
        var data = e.target.result;
        // do other stuff with the contents here................
      };
      reader.readAsText(e.target.files.item(0));
    }
  });
</script>
Now click the Choose File button and browse to the file file:///C:/path/to/XSL%20Website/data/home.xml
Here is an AppleScript that will launch Chrome with the --allow-file-access-from-files switch turned on, for the OS X/Chrome devs out there:
set chromePath to POSIX path of "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"
set switch to " --allow-file-access-from-files"
do shell script (quoted form of chromePath) & switch & " > /dev/null 2>&1 &"
Launch Chrome like so to bypass this restriction: open -a "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --args --allow-file-access-from-files.
Derived from Josh Lee's comment, but I needed to specify the full path to Google Chrome so as to avoid having Google Chrome open from my Windows partition (in Parallels).
The way I just worked around this is not to use XMLHttpRequest at all, but to include the data needed in a separate JavaScript file instead. (In my case I needed a binary SQLite blob to use with https://github.com/kripken/sql.js/)
I created a file called base64_data.js (and used btoa() to convert the data that I needed and insert it into a <div> so I could copy it).
var base64_data = "U1FMaXRlIGZvcm1hdCAzAAQA ...<snip lots of data> AhEHwA==";
and then included the data in the HTML like normal JavaScript:
<div id="test"></div>
<script src="base64_data.js"></script>
<script>
  data = atob(base64_data);
  var sqldb = new SQL.Database(data);
  // Database test code from the sql.js project
  var test = sqldb.exec("SELECT * FROM Genre");
  document.getElementById("test").textContent = JSON.stringify(test);
</script>
I imagine it would be trivial to modify this to read JSON, maybe even XML; I'll leave that as an exercise for the reader ;)
If you are serving the files from your own Node.js server, you can try putting 'Access-Control-Allow-Origin': '*' in the headers object you pass to response.writeHead(statusCode, { ... }).
Use the 'Web Server for Chrome' app. (You may actually already have it on your PC, whether you know it or not; just search for it in Cortana!) Open it, click 'Choose Folder', and choose the folder with your file in it. Do not actually select your file; select the file's folder, then click on the link(s) shown under the 'Choose Folder' button.
If it doesn't take you to the file, then add the name of the file to the URL, like this:
https://127.0.0.1:8887/fileName.txt
Link to Web Server for Chrome: click me
If you only need to access the files locally, then you can include the exact path to the file. Rather than using
../images/img.jpg
use
C:/Users/username/directoryToImg/img.jpg
The reason the CORS error is happening is that you are trying to traverse to another directory within the webpage; by including the direct path you are not changing directory, you are pulling from a direct location.