So I am building a huge project in React.js with VS Code, and I have had no problems up to this point.
Recently, when dealing with this JSON object, my program (on localhost:3000) just won't save, and when I refresh the page, it crashes.
I can't even close the tab when this happens; it just freezes mid-refresh, and I have to press Ctrl + Alt + Del to shut it down. It seems to happen randomly.
I am using Google Chrome.
What is going wrong, and what should I do? Thank you.
for (var obj in json1) {
    const json = json1[obj];
    for (var i = 0; i < json.length; i++) {
        const apt = json[i];
        // Do some stuff with the big JSON object
    }
}
I am using Piwik/Matomo's tracker to provide my users with custom JS trackers and a curated dashboard of analytics tailor-made for them. One problem I am facing consistently is verifying whether the tracker is installed and working correctly.
I have tried using file_get_contents and/or cURL to fetch the page and check whether the tracker exists, but this doesn't always work. So instead I am trying to simulate a visit and see whether the tracker sends me any data when that happens.
Since file_get_contents/cURL do not trigger JavaScript, is there an alternative (and lightweight) method to fire the page's JavaScript and trigger a visit for testing?
Update: I implemented this using PhantomJS as suggested, with the overall function being something like the pseudocode below. I haven't yet tested this extensively for all my users; is there a more elegant solution?
checktracker()
{
    if (data exists and data > 0)
    {
        return true
    }
    else if (data exists and data == 0)
    {
        simulate visit with PhantomJS // Relevant to question
        check again
        if (still 0)
        {
            return false
        }
        else
        {
            return true
        }
    }
    else
    {
        invalid site id
    }
}
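For the "simulate visit with PhantomJS" step, a minimal sketch could look like the following; the URL and the five-second wait are assumptions, and the point is simply to load the page so piwik.js executes and fires its tracking request:

// Minimal PhantomJS sketch: load the page so its tracking code runs.
// 'http://example.com/page-under-test' and the 5-second wait are placeholders.
var page = require('webpage').create();

page.open('http://example.com/page-under-test', function (status) {
    if (status !== 'success') {
        console.log('Failed to load page');
        phantom.exit(1);
        return;
    }
    // Give piwik.js time to load and send its request to piwik.php.
    window.setTimeout(function () {
        console.log('Visit simulated');
        phantom.exit(0);
    }, 5000);
});

Instead of querying the Matomo API afterwards, you could also watch for the piwik.php request directly via page.onResourceRequested.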
So you want to automatically check whether a specific website has integrated Matomo correctly? I recently wanted to do the same to create a browser extension to quickly debug common errors.
One way would be checking the DOM. The Matomo tracking code adds a <script> tag to the website, so you can check for its existence via JavaScript:
function getDomElements() {
    var allElements = document.getElementsByTagName('script');
    for (var i = 0, n = allElements.length; i < n; i++) {
        if (allElements[i].hasAttribute("src") && allElements[i].getAttribute("src").endsWith("piwik.js")) { // TODO: support renamed piwik.js
            return allElements[i];
        }
    }
}
But if you also have access to the JS console, the probably better solution would be to check whether the tracking code has initialized correctly:
If something like this outputs your Matomo URL, chances are high that the tracking code is embedded correctly.
var tracker = window.Piwik.getAsyncTracker()
console.log(tracker.getPiwikUrl())
Of course it can still fail (e.g. if the server returns 403 on the piwik.php request), but if you control the server, this shouldn't happen.
To run the check automatically, you could look into Headless Chrome or Firefox.
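For example, a sketch of that automation with Headless Chrome driven by Puppeteer (the URL is a placeholder, and Puppeteer itself is an assumption on top of this answer) might be:

// Hedged sketch: run the tracker check from above in Headless Chrome.
// 'http://example.com' stands in for the site to verify.
const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto('http://example.com', { waitUntil: 'networkidle0' });

    const piwikUrl = await page.evaluate(() => {
        // Same console check as above, guarded in case Piwik never loaded.
        if (window.Piwik && typeof window.Piwik.getAsyncTracker === 'function') {
            return window.Piwik.getAsyncTracker().getPiwikUrl();
        }
        return null;
    });

    console.log(piwikUrl ? 'Tracker found at ' + piwikUrl : 'No tracker detected');
    await browser.close();
})();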
For a bit of fun and to learn more about JS and the new HTML5 specs, I decided to build a file uploader plugin for my own personal use.
After I select my files, I send the files to my Web Worker to be spliced into 1mb chunks and uploaded to my server individually. I opted to send them individually to benefit from individual progress callbacks and pause/resume functionality (later).
The problem I'm getting is when I select a lot of files to upload. If I only select 8, no problems. If I select 99, the server rejects/aborts after about the 20th file, although sometimes it stops after 22, 31, or 18: totally random.
Firefox can get away with more than Chrome before aborting. Chrome calls the requests 'failed' and Firefox calls them 'aborted'; Firefox usually aborts after about file 40. Not only that, but my test server becomes unresponsive and throws a 'the connection was reset' error, becoming responsive again less than 20 seconds later.
Because I'm using a Web Worker, I set my XMLHttpRequests to synchronous so that each request completes before a new one starts, and the PHP script is on the same domain, so I'm baffled to see the requests rejected and would love to hear what in my code is causing this.
This is the plugin part that sends to the Worker. Pretty irrelevant but who knows:
var worker = new Worker('assets/js/uplift/workers/uplift-worker.js');
worker.onmessage = function(e) {
    console.log(e.data);
};
worker.postMessage({'files': filesArr});
And this is uplift-worker.js:
var files = [], p = true;

function upload(chunk) {
    var upliftRequest = new XMLHttpRequest();
    upliftRequest.onreadystatechange = function() {
        if (upliftRequest.readyState == 4 && upliftRequest.status == 200) {
            // do something
        }
    };
    upliftRequest.setRequestHeader("Cache-Control", "no-cache");
    upliftRequest.setRequestHeader("X-Requested-With", "XMLHttpRequest");
    upliftRequest.open('POST', '../php/uplift.php', false);
    upliftRequest.send(chunk);
}

function processFiles() {
    for (var j = 0; j < files.length; j++) {
        var blob = files[j];
        const BYTES_PER_CHUNK = 1024 * 1024; // 1mb chunk sizes.
        const SIZE = blob.size;
        var start = 0,
            end = BYTES_PER_CHUNK;
        while (start < SIZE) {
            var chunk = blob.slice(start, end);
            upload(chunk);
            start = end;
            end = start + BYTES_PER_CHUNK;
        }
        p = j == (files.length - 1);
        self.postMessage(blob.name + " uploaded successfully");
    }
}

self.addEventListener('message', function(e) {
    var data__Files = e.data.files;
    for (var j = 0; j < data__Files.length; j++) {
        files.push(data__Files[j]);
    }
    if (p) processFiles();
});
BUGS FOUND:
I managed to get this from the Chrome console:
ERROR: Line 27 in http://xxxxxxx/demos/uplift/assets/js/uplift/workers/uplift-worker.js: Uncaught NetworkError: Failed to execute 'send' on 'XMLHttpRequest': Failed to load 'http://xxxxxxx/demos/uplift/assets/js/uplift/php/uplift.php'.
This points to the Worker script line upliftRequest.send(chunk);.
Firebug didn't give me much to work with at all, but it did show the aborted requests and the headers sent with them.
I initially thought it was a server-side problem, so I removed all PHP from uplift.php and left an empty page, to simply test the upload-to-browser parts and the posting of the requests, but the problems continued.
UPDATE:
I'm beginning to think my hosting provider is limiting request rates using Apache mod_security rules, possibly to prevent my IP from brute-forcing the server. Adding to that, my uploader works fine on my localhost (MAMP).
I did a little more research on my new suspicions. If my homemade upload plugin has trouble sending multiple files/requests to my host, then surely some of the other popular upload plugins that use the same technology and post files to the same host would have similar complaints online. That search yielded some good results, with many people backing up the experience I'm having. One guy uploads 'lots of images' to the same host using Uploadify HTML5 (which also sends individual requests), and his requests get blocked too. I suppose I'd better contact the host to see what the deal is with their rate limiting.
Possible problem
I think this is a server-side issue. Even with a plain PHP file, the server will open a new thread for each request; check it with top in the console.
You are uploading the chunks in a while loop without waiting until the previous chunk upload has finished.
Suggestion
I would create an array of all the chunks, call upload(chunks), and manage the concurrency there; onreadystatechange is a good place to kick off the next chunk in the array, as in the sketch below.
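A minimal sketch of that approach, assuming an asynchronous XMLHttpRequest and reusing the endpoint path from the question, could look like this:

// Hedged sketch: upload chunks one at a time, chaining the next upload
// from onreadystatechange so only one request is ever in flight.
function uploadChunks(chunks, onDone) {
    var index = 0;

    function next() {
        if (index >= chunks.length) {
            onDone();
            return;
        }
        var request = new XMLHttpRequest();
        request.open('POST', '../php/uplift.php', true); // async, not sync
        request.onreadystatechange = function () {
            if (request.readyState === 4 && request.status === 200) {
                index++;
                next(); // start the next chunk only after this one finished
            }
        };
        request.send(chunks[index]);
    }

    next();
}

This throttles the client to one request at a time without blocking the worker on synchronous XHR, and gives a natural hook for progress callbacks and pause/resume later.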
I would like to automate the process of visiting a website, clicking a button, and saving the file. The only way to download the file on this site is to click a button. You can't navigate to the file using a url.
I have been trying to use phantomjs and casperjs to automate this process, but haven't had any success.
I recently tried to use Brandon's solution here:
Grab the resource contents in CasperJS or PhantomJS
Here is my code for that:
var fs = require('fs');
var cache = require('./cache');
var mimetype = require('./mimetype');
var casper = require('casper').create();

casper.start('http://www.example.com/page_with_download_button', function() {
});

casper.then(function() {
    this.click('#download_button');
});

casper.on('resource.received', function (resource) {
    "use strict";
    for (var i = 0; i < resource.headers.length; i++) {
        if (resource.headers[i]["name"] == "Content-Type" && resource.headers[i]["value"] == "text/csv; charset-UTF-8;") {
            cache.includeResource(resource);
        }
    }
});

casper.on('load.finished', function(status) {
    for (var i = 0; i < cache.cachedResources.length; i++) {
        var file = cache.cachedResources[i].cacheFileNoPath;
        var ext = mimetype.ext[cache.cachedResources[i].mimetype];
        var finalFile = file.replace("." + cache.cacheExtension, "." + ext);
        fs.write('downloads/' + finalFile, cache.cachedResources[i].getContents(), 'b');
    }
});

casper.run();
I think the problem could be caused by my cachePath being incorrect in cache.js
exports.cachePath = 'C:/Users/username/AppData/Local/Ofi Labs/PhantomJS';
Should I be using something in addition to the forward slashes to define the path?
When I try
casperjs --disk-cache=true export_script.js
Nothing is downloaded. After a little debugging I have found that cache.cachedResources is always empty.
I would also be open to solutions outside of phantomjs/casperjs.
UPDATE
I am no longer trying to accomplish this with CasperJS/PhantomJS.
I am using the Tampermonkey Chrome extension, as suggested by dandavis.
Tampermonkey was extremely easy to figure out.
I installed Tampermonkey, navigated to the page with the download link, then clicked New Script under Tampermonkey and added my JavaScript code:
document.getElementById("download_button").click();
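For reference, a fuller version of that userscript might look like the sketch below; the @name and @match values are placeholders I've assumed, not part of the original setup:

// ==UserScript==
// @name   Auto-download export
// @match  http://www.example.com/page-with-dl-button
// ==/UserScript==
(function () {
    'use strict';
    // Click the download button as soon as the page loads.
    document.getElementById('download_button').click();
})();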
Now every time I navigate to the page in my browser, the file is downloaded. I then created a batch script that looks like this:
set date=%DATE:~10,4%_%DATE:~4,2%_%DATE:~7,2%
chrome "http://www.example.com/page-with-dl-button"
timeout 10
move "C:\Users\user\Downloads\export.csv" "C:\path\to\dir\export_%date%.csv"
I set that batch script to run nightly using the windows task scheduler.
Success!
Your button most likely issues a POST request to the server.
In order to track it:
1. Open the Network tab in Chrome developer tools.
2. Navigate to the page and hit the button.
3. Note which request led to the file download. Right-click on it and choose Copy as cURL.
4. Run the copied cURL command.
Once you have cURL working, you can schedule downloads using cron or Task Scheduler, depending on the operating system you are using.
In an attempt to prevent clever visitors from cheating the system on my site, I'm trying to secretly record, and send to the server, anything typed into the browser's console. The relevant chunk of code is:
window.console.command = function (thiscmd) {
    alert('thiscmd = ' + thiscmd); // test
    if (SG.consoleCommands) SG.consoleCommands += thiscmd;
}
but the alert isn't firing. I'm using Google Chrome, but I tried in IE as well.
Relevant fiddle: go to http://jsfiddle.net/63cL1qoe/, type var x = 69; in the console, and press Enter. Nothing happens.
What am I missing?
Basically, I want to figure out the best way to check the user's JRE version on a web page. I have a link to a JNLP file that I only want to display if the user's JRE version is 1.6 or greater. I've been playing around with the deployJava JavaScript code (http://java.sun.com/javase/6/docs/technotes/guides/jweb/deployment_advice.html) and have gotten it to work in every browser but Safari (by using deployJava.versionCheck). For whatever reason, Safari doesn't give the most up-to-date JRE version number; I found this out by displaying the value of the getJREs() function. I have 1.6.0_20 installed, which is displayed in every other browser, but Safari keeps saying that only 1.5.0 is currently installed.
I've also tried using the createWebStartLaunchButtonEx() function and specifying '1.6.0' as the minimum version, but when I click the button nothing happens (in any browser).
Any suggestions?
FYI: Here's my code --
if (deployJava.versionCheck('1.6+')) {
    var dir = location.href.substring(0, location.href.lastIndexOf('/') + 1);
    var url = dir + "tmaj.jnlp";
    deployJava.createWebStartLaunchButton(url, '1.6.0');
} else {
    var noticeText = document.createTextNode("In order to use TMAJ, you must visit the following link to download the latest version of Java:");
    document.getElementById('jreNotice').appendChild(noticeText);
    var link = document.createElement('a');
    link.setAttribute('href', 'http://www.java.com/en/download/index.jsp');
    var linkText = document.createTextNode("Download Latest Version of Java");
    link.appendChild(linkText);
    document.getElementById('jreDownloadLink').appendChild(link);
}
Probably the best bet would be to check the navigator.plugins array.
Here is a quick example that works in Chrome/Firefox. As far as I know, Internet Explorer does not provide access to the plugins array.
function getJavaVersion() {
    var j, matches;
    for (j = 0; j < navigator.plugins.length; j += 1) {
        matches = navigator.plugins[j].description.match(/Java [^\d]+(\d+\.?\d*\.?\d*_?\d*)/i);
        if (matches !== null) {
            return matches[1];
        }
    }
    return null;
}
console.log(getJavaVersion()); // => 1.6.0_16
Not sure if it helps you, but you can specify the minimum JRE version as a clause in the JNLP file. Web Start will not launch your app if the requirement is not met.
See the <j2se> tag.
Also, have a look at
How to check JRE version prior to launch?