I'm currently building an app for web automation using AngleSharp. I've managed to log in to a website, but I can't find the element I'm looking for using context.Active.QuerySelectorAll.
I understand that this is likely because some of the JavaScript hasn't run in the HTML I'm searching through, as per this link: Is the HTML shown via 'View Source' different from the HTML shown in (Firebug) developer tools?
How do I force AngleSharp to execute all of the JavaScript before I look for the specific element?
Code:
var config = AngleSharp.Configuration.Default
    .WithDefaultLoader()
    .WithCookies()
    .WithJavaScript()
    .WithCss();
var browsingContext = BrowsingContext.New(config);
await browsingContext.OpenAsync("https://users.premierleague.com/");
await browsingContext.Active
    .QuerySelector<IHtmlFormElement>("form[action='/accounts/login/']")
    .SubmitAsync(new
    {
        login = "abc#gmail.com",
        password = "password"
    });
await browsingContext.OpenAsync("https://fantasy.premierleague.com/a/team/my/");
Everything works fine up until this point, and I can confirm that I am logged in. However, I then can't seem to get a value returned for the following:
var x = browsingContext.Active.QuerySelectorAll("*").Where(m => m.ClassName == "ismjs-link ism-link ism-link--more");
And I know that this element exists, as I've checked numerous times through the "Inspect" functionality in Google Chrome.
What am I missing, and how do I get the JavaScript to run?
Thanks!
onlineCheck.js
const add = function () {
  navigator.onLine ? console.log("online") : console.log("offline");
};
window.add = add;

module.exports = {
  add,
};
pos.js
const online = require("./onlineCheck");
online.add();
This is my Electron.js project. If the app goes offline, I need a console log, and it should work across the whole application, on whatever page is currently open. So I tried the code above: I made a separate JS file for the online-check function and imported it into every other JS file. But it isn't suitable: when the application starts, the message is logged, but if the network then goes offline, "offline" is never logged. I need to solve that issue.
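One likely cause, sketched below: navigator.onLine is only read once at startup, so the check never re-runs when connectivity changes. Listening for the browser's online/offline events in the renderer process re-runs it automatically. The logStatus helper is my naming, not from the original code; this is a sketch, not a drop-in replacement.

```javascript
// onlineCheck.js -- hedged sketch for an Electron renderer process.
// logStatus maps the boolean connectivity state to a string,
// logs it, and returns it so callers can react to it too.
function logStatus(isOnline) {
  const status = isOnline ? "online" : "offline";
  console.log(status);
  return status;
}

// Guard so the module can also be loaded outside a browser context.
if (typeof window !== "undefined") {
  // Log the initial state once, then again on every change.
  logStatus(navigator.onLine);
  window.addEventListener("online", () => logStatus(true));
  window.addEventListener("offline", () => logStatus(false));
}

module.exports = { logStatus };
```

Because the listeners live in the module itself, requiring it once per window is enough; there is no need to call anything on every page.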
There is an image open in the Preview application, and it is unsaved.
I have tried this code, by building and running a standalone script in Script Editor:
var preview = Application('Preview')
preview.documents[0].close({ saving: true, savingIn: '/Volumes/USB Drive/Untitled-Image.png' })
I see this in the logs:
app = Application("Preview")
app.documents.at(0).Symbol.toPrimitive()
--> Error -1700: Can't convert types.
app.documents.at(0).Symbol.toPrimitive()
--> Error -1700: Can't convert types.
However, I'm not sure I'm reading the docs correctly, and I'm not sure how to further debug this code. A satisfactory alternative here would be to send a keystroke to save and click the Save button.
What can I do to persist the image in Preview to a file?
One can declare paths speculatively in JavaScript using Path('...'), and then use the save command to save the image to the file that will be created at that path:
Preview = Application('com.apple.Preview');
ImageFilepath = Path('/Users/CK/Pictures/Preview_SavedImage.jpg');
Preview.documents[0].save({ in: ImageFilepath });
This code worked for me.
var preview = Application('Preview')
var photo = preview.documents[0]
photo.close({ saving: 'yes', savingIn: Path("/Path/To Some/file_name.png") })
Upon execution, in the Replies log of the Script Editor window, something slightly different from the question's sample is logged.
app = Application("Preview")
app.close(app.documents.at(0), {savingIn:Path("/Path/To Some/file_name.png"), saving:"yes"})
I was able to figure this out by studying the scripting dictionary for macOS Mojave 10.14 as well as the Release Notes for JXA; there is a similar example under the Passing Event Modifiers section of the latter.
From the Script Editor scripting dictionaries, this is the entry for the close method in the Standard Suite:
close method : Close a document.
close specifier : the document(s) or window(s) to close.
[saving: "yes"/"no"/"ask"] : Should changes be saved before closing?
[savingIn: File] : The file in which to save the document, if so.
Note the key differences between the working code here and the code in the question:
Specify a Path object when passing the savingIn option to close.
Specify the saving option as one of "yes", "no", or "ask" (not the boolean true).
If you're new to JXA, Automator, and/or Script Editor for Mac, check out the JXA Cookbook.
If this answer helps you, you're probably having a bad time, so best of luck!
I am using Piwik/Matomo's tracker to provide my users with custom JS trackers, and to provide a curated dashboard of analytics tailor-made for them. One problem I am facing consistently is verifying whether the tracker is installed and working correctly.
I have tried using file_get_contents and/or cURL to fetch the page and check if the tracker exists, but this doesn't always work. So I am instead trying to simulate a visit, and see if the tracker sends me any data when it happens.
Since file_get_contents/cURL do not trigger JavaScript, is there an alternative (and lightweight) method to run the page's JavaScript and trigger a visit for testing?
Update: I implemented this using PhantomJS as suggested, with the overall function being something like this. I haven't yet tested it extensively for all my users; is there a more elegant solution?
checkTracker
{
    if (data exists && data > 0)
    {
        return true
    }
    else if (data exists && data == 0)
    {
        simulate visit with PhantomJS  // relevant to question
        check again
        if (still 0)
        {
            return false
        }
        else
        {
            return true
        }
    }
    else
    {
        invalid site id
    }
}
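For reference, the pseudocode above can be expressed as an async JavaScript function. fetchVisitCount and simulateVisit are hypothetical stand-ins for the Matomo API query and the PhantomJS visit; neither name comes from Matomo's actual API.

```javascript
// Hedged sketch of the decision logic only. The two function-valued
// parameters stand in for the real Matomo query and the PhantomJS step.
async function checkTracker(fetchVisitCount, simulateVisit) {
  const count = await fetchVisitCount();
  if (count == null) {
    // no data at all: the site id is presumably invalid
    throw new Error("invalid site id");
  }
  if (count > 0) {
    return true; // data exists and is non-zero
  }
  // No visits yet: simulate one, then check again.
  await simulateVisit();
  const recheck = await fetchVisitCount();
  return recheck > 0;
}
```

Injecting the two steps as parameters keeps the branching logic testable without a network or a headless browser in the loop.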
So you want to automatically check if a specific website has integrated Matomo correctly? I recently wanted to do the same to create a browser extension to quickly debug common errors.
One way would be checking the DOM. The Matomo tracking code adds a <script> tag to the website, so you can check for its existence via JavaScript:
function getDomElements() {
    var allElements = document.getElementsByTagName('script');
    for (var i = 0, n = allElements.length; i < n; i++) {
        if (allElements[i].hasAttribute("src") && allElements[i].getAttribute("src").endsWith("piwik.js")) { // TODO: support renamed piwik.js
            return allElements[i];
        }
    }
}
But if you also have access to the JS console, the probably better solution would be checking whether the tracking code has initialized correctly:
If something like this outputs your Matomo URL, chances are high that the tracking code is embedded correctly.
var tracker = window.Piwik.getAsyncTracker()
console.log(tracker.getPiwikUrl())
Of course it can still fail (e.g. if the server returns 403 on the piwik.php request), but if you control the server, this shouldn't happen.
To run the check automatically, you could look into Headless Chrome or Firefox.
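To make that concrete in Node, something like the following sketch could work. The Puppeteer usage is an assumption on my part (it requires npm install puppeteer, and is only loaded inside checkSite); hasTrackerScript mirrors the DOM check above as a pure, testable helper.

```javascript
// Pure helper mirroring the DOM check above: does any script src
// point at piwik.js? (TODO: support a renamed piwik.js, as noted.)
function hasTrackerScript(scriptSrcs) {
  return scriptSrcs.some((src) => src.endsWith("piwik.js"));
}

// Hypothetical headless check (assumes `npm install puppeteer`).
async function checkSite(url) {
  const puppeteer = require("puppeteer"); // loaded lazily on purpose
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    // Wait for the network to go idle so async tracker scripts load too.
    await page.goto(url, { waitUntil: "networkidle0" });
    const srcs = await page.$$eval("script[src]", (els) =>
      els.map((el) => el.src)
    );
    return hasTrackerScript(srcs);
  } finally {
    await browser.close();
  }
}
```

Splitting the pure predicate from the browser plumbing means the matching rule can be unit-tested without launching anything headless.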
So I have a system that essentially enables communication between two computers, and uses a WebRTC framework to achieve this:
"The Host": This is the control computer; clients connect to it, and it controls the clients' windows.
"The Client": This is the user on the other end. Their window is controlled by the host.
What I mean by control is that the host can:
change CSS on the client's open window.
control the URL of an iframe on the client's open window.
There are variations on these, but essentially that's the amount of control there is.
When "the client" logs in, the host sends a web address to the client. This web address will then be displayed in an iframe, as such:
$('#iframe_id').attr("src", URL);
There is also the ability to send a new web address to the client, in the form of a message. The same code as above is used to navigate to that URL.
The problem I am having is that on roughly 1 in 4 computers the iframe doesn't actually load. It either displays a white screen, or it shows the little "page could not be displayed" icon:
I have been unable to reliably duplicate this bug
I have not seen a clear pattern between computers that can and cannot view the iframe content.
All clients are running Google Chrome, most on Apple Power Macs. The only semi-link I have made is that Windows computers seem slightly more susceptible to it, but not in a way I can reproduce. Sometimes refreshing the page works...
Are there any known bugs that could possibly cause this? I have read about iframe white flashes, but I am confident it isn't that issue. I am also confident it isn't a problem with jQuery loading, because that would produce issues earlier and would be easy to spot.
Thanks so much.
Alex
Edit: OK, so here is the code that collects data from the server. Upon inspection, the data being received is correct.
conn.on('data', function(data) {
    var data_array = JSON.parse(data);
    console.log(data_array);
    // initialisation
    if (data_array.type == 'init' && inititated === false) {
        if (data_array.duration > 0) {
            set_timeleft(data_array.duration); // how long is the exam? (minutes)
        } else {
            $('#connection_remainingtime').html('No limits');
        }
        $('#content_frame').attr("src", data_array.uri); // url to navigate to
        //timestarted = data_array.start.replace(/ /g,''); // start time
        ob = data_array.ob; // is it open book? Doesn't do anything really... why use it if it isn't open book?
        snd = data_array.snd; // is sound allowed?
        inititated = true;
    }
});
It is definitely trying to make the iframe navigate somewhere, as when the client launches, the iframe changes; it's trying to load something but failing.
EDIT: Update on this issue: it does actually work, just not with Google Forms. And again, it isn't everybody's computer, only a few people's. If they navigate elsewhere (http://www.bit-tech.net for example) then it works just fine.
** FURTHER UPDATE **: It seems on the ones that fail there is an 'X-Frame-Options' issue, in that it's set to 'SAMEORIGIN'. I don't understand why some students would get this problem and some wouldn't... surely it depends on the page you are navigating to, and if one person can load it, all should be able to?
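For what it's worth, that behaviour is consistent with how the X-Frame-Options header works: SAMEORIGIN blocks a frame only when the framing page's origin differs from the framed page's, so the outcome can vary with the URL each student is sent. A small helper (my naming, purely illustrative) capturing the rule:

```javascript
// Returns true if a frame embed would be refused, given the value of
// the response's X-Frame-Options header (pass null/undefined if absent).
function embedBlocked(xfoHeader, framedBySameOrigin) {
  if (!xfoHeader) {
    return false; // no header, no XFO framing restriction
  }
  const value = xfoHeader.trim().toUpperCase();
  if (value === "DENY") {
    return true; // never embeddable
  }
  if (value === "SAMEORIGIN") {
    return !framedBySameOrigin; // blocked only when framed cross-origin
  }
  return false; // e.g. ALLOW-FROM: browser-dependent, not modelled here
}
```

Note the header is set by the framed site (Google Forms here), so nothing in the framing page's JavaScript can override it.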
So the problem here was that the students were loading this behind a proxy server which has an issue with cookies. Although the site does not use cookies, the proxy does, and when a student had blocked "third-party cookies" in their settings, the proxy would not allow the site to load.
Simply allowing cookies fixed it :)
iframes are one of the last things to load in the DOM, so wrap your iframe-dependent code in this:
document.getElementById('content_frame').onload = function() {...}
If that doesn't work, then it's the document within the iframe. If you own the page inside the iframe, then you have options. If not... setTimeout? Or window.onload...?
SNIPPET
conn.on('data', function(data) {
    var data_array = JSON.parse(data);
    console.log(data_array);
    // initialisation
    if (data_array.type == 'init' && inititated === false) {
        if (data_array.duration > 0) {
            set_timeleft(data_array.duration); // how long is the exam? (minutes)
        } else {
            $('#connection_remainingtime').html('No limits');
        }
        // run the iframe-dependent code only once the frame has loaded
        document.getElementById('content_frame').onload = function() {
            //timestarted = data_array.start.replace(/ /g,''); // start time
            ob = data_array.ob; // is it open book?
            snd = data_array.snd; // is sound allowed?
            inititated = true;
        };
        $('#content_frame').attr("src", data_array.uri); // url to navigate to
    }
});
I've got a few problems with Google Picker that I just can't seem to solve.
Firstly, I have a problem with signing in to my Google account via the Google Picker window (as reported here https://groups.google.com/forum/#!topic/google-picker-api/3VXqKO1BD5g and elsewhere). In short, the picker works perfectly up until the point where it returns from the sign-in action: it fails to load the picker view once the account is signed in. The actions taken are as follows:
Open Google picker
Receive not signed in page, with sign in button.
Button opens a new window for google sign in.
Enter details and sign in. The sign in is successful.
Sign-in window closes, and focus returns to the Google Picker, but it fails to recognise the sign-in, receiving instead the above-mentioned "The feature you requested is currently unavailable. Please try again later." error, with a JS ReferenceError: init is not defined.
Secondly, I have a problem in IE10 where the browser shows the "you are not signed in" screen even when I am. Clicking the button opens the sign-in window, which closes immediately (sign-in recognised?), but nothing happens in the Google Picker window.
The example found here: http://www-personal.umich.edu/~johnathb/misc/gpicker.html seems to work just fine on IE10, so I am not sure what the problem is. Possible differences are:
I have HTTPS enabled on my site (but it didn't seem to make a difference when turned off).
I am currently running my app within an intranet (with internet access though).
Something to do with public IPs or such? But this wouldn't explain why the Google Picker works in Firefox etc.
The code used to load and handle the picker is shown below:
$('.googleDrivePicker').click(function () {
    var inputControl = $(this).data('inputid');
    // Google Picker API for the Google Docs import
    google.load('picker', '1', {
        "language": '#Session["kieli"]',
        "callback": function () {
            // Create and render a Picker object for searching images.
            var picker = new google.picker.PickerBuilder()
                .addView(google.picker.ViewId.DOCS)
                .addView(google.picker.ViewId.IMAGE_SEARCH)
                .setCallback(function (data) {
                    // A simple callback implementation.
                    var url = '';
                    if (data[google.picker.Response.ACTION] == google.picker.Action.PICKED) {
                        var doc = data[google.picker.Response.DOCUMENTS][0];
                        url = doc[google.picker.Document.EMBEDDABLE_URL] || doc[google.picker.Document.URL];
                        $('#' + inputControl).val(url).change();
                    }
                })
                .build();
            picker.setVisible(true);
            $('.picker.modal-dialog-bg').css('z-index', 1101);
            $('.picker.modal-dialog.picker-dialog').css('z-index', 1102);
        }
    });
});
Would really appreciate help with either of the above problems.
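One thing that may be worth trying for the HTTPS/intranet case (an assumption on my part, not something the question confirms would help): the PickerBuilder has a documented setOrigin method, and passing the hosting page's origin explicitly sometimes matters when the picker runs on non-standard hosts. pageOrigin and buildPicker are hypothetical names for this sketch:

```javascript
// pageOrigin is a pure helper; buildPicker assumes google.load('picker', ...)
// has already completed, as in the code above.
function pageOrigin(loc) {
  return loc.protocol + "//" + loc.host;
}

function buildPicker(onPicked) {
  return new google.picker.PickerBuilder()
    .addView(google.picker.ViewId.DOCS)
    .setOrigin(pageOrigin(window.location)) // tell the picker who is framing it
    .setCallback(onPicked)
    .build();
}
```

If the picker is rendered from a different origin than it expects, its sign-in round trip can fail silently, which would at least be consistent with the symptoms described.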