I have a group of three e2e tests which I am converting from Cypress 8.7 to 10.3 with the end goal of being able to take advantage of Cypress' component testing.
The tests all work both as a group and as individual specs (via --spec <path>) in 8.7.
After converting to 10.3, I have a spec that calls a common command. The command clicks a button (which the extension adds to the page), and the button's click listener calls chrome.runtime.sendMessage with a message to open the extension UI. In this particular case, the content script handles its own message (which is technically redundant), but since other cases do communicate with the background script, we use the same sendMessage call for consistency (this works every time in the actual Chrome environment). To facilitate this, the background script has a listener that effectively sends any message it does not handle directly back to the sending tab.
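For context, the relay in the background script looks roughly like this (a sketch only; the message shape and the handled/unhandled split are assumptions, not our actual code):

// Background script: echo any message we don't handle directly back to the
// tab that sent it, so the content script's own onMessage listener receives it.
chrome.runtime.onMessage.addListener((message, sender) => {
  if (isHandledByBackground(message)) {    // hypothetical predicate
    handleInBackground(message, sender);   // hypothetical handler
  } else if (sender.tab && sender.tab.id !== undefined) {
    chrome.tabs.sendMessage(sender.tab.id, message);
  }
});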
Spec 1: always works
Spec 2: always works when run alone with --spec, and never works when all the tests are run together
Spec 3: same as spec 2
I do not believe it is a race condition, as no amount of cy.wait() has been successful. I can verify that the portion of the content script that registers the listener with chrome.runtime.onMessage.addListener runs at the start of each spec (both alone and all together). However, that listener never runs when the button is clicked (even when paused and clicking manually).
I have the extension manifest set up for the tests to only match the URLs visited by Cypress, combined with all_frames: true, so the extension button shows only in the content window and not in the overall Cypress Chrome window. Logging document both at listener registration and in the click listener logs the same document, so I don't think it is a cross-frame issue.
Based on the available logging from the background script, my conclusion is that it effectively stops running (breakpoints at the beginning of its message handler no longer fire). However, there's nothing to indicate why that might be.
If I place all the tests from those other specs into one spec, that call works as expected (although the tests still fail, presumably due to some other state which is cleared between specs). Is there something I can do to find out why the background script stops running between specs in Cypress 10.3?
One final note: there are some uncaught and ignored errors in the background script. I am working on cleaning those up, but these tests have always worked in the past despite that (and they work when each spec is run alone), so I'm at a loss.
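For reference, this is roughly the kind of logging I can add on both ends to see where the chain breaks (the button variable and the message shape are placeholders):

// Content script: check chrome.runtime.lastError in the sendMessage callback.
button.addEventListener('click', () => {
  chrome.runtime.sendMessage({ type: 'OPEN_UI' }, () => {
    if (chrome.runtime.lastError) {
      // e.g. "Could not establish connection. Receiving end does not exist."
      console.warn('sendMessage failed:', chrome.runtime.lastError.message);
    }
  });
});

// Background script: log every message that actually arrives.
chrome.runtime.onMessage.addListener((message, sender) => {
  console.log('background received', message, 'from tab', sender.tab && sender.tab.id);
});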
Related
I came across a website that runs this code:
function check(){console.clear();before = new Date().getTime();…
on load, discarding valuable console messages. How can I make Firefox
ignore console.clear() globally?
I wonder why that even exists in the first place. It should not be
possible for a website to delete potentially relevant debugging output.
You can solve this in two ways.
First, you can write a Firefox extension that executes a script on page load and assigns an empty function to console.clear, so nothing breaks if it's called:
console.clear = () => {}
References for building an extension that runs on page load:
Chrome Extension: Make it run every page load
https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Modify_a_web_page
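A minimal sketch of such a content script for Firefox, assuming the manifest registers it with "run_at": "document_start" on the pages in question (the file name and manifest details are assumptions):

// contentScript.js -- runs before the page's own scripts (document_start).
// Content scripts see the page through Xray wrappers, so the override has to
// be made in the page's own context via wrappedJSObject/exportFunction.
window.wrappedJSObject.console.clear = exportFunction(function () {}, window);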
Secondly, you can load the page once, open DevTools, go to Sources, search for console.clear, add breakpoints everywhere it's called, and reload the page. Execution will stop the first time console.clear is called, and you can then go to the Console and override console.clear with an empty function.
Reference for using breakpoints in Firefox:
https://developer.mozilla.org/en-US/docs/Tools/Debugger/How_to/Set_a_breakpoint
Since you're asking about Firefox, if you don't want to write your own extension, you can use the one that already exists:
Disallow Console Clear Firefox Addon
It takes the right approach: it hijacks console.clear on page load.
Just a tip for the record:
For some sites, the word console.clear won't appear anywhere in the sources. (And sadly, Firefox's Preserve Log option currently still isn't as powerful as Chrome's.)
But the hijack might still work!
By the way, directly reassigning console.clear from the console itself might not work.
So just try it.
I've been writing functional tests using selenium-webdriver with the yadda library. The problem is that the same test suite behaves differently in my different environments. Example:
In the tests, the result differs depending on which environment I run against.
Local localhost:5000
Open my search site
․ when i go to my site: 2169ms
․ when i write a text on the search input: 21ms
․ when i click the search button: 130ms
․ then i get the results page.": 46ms
Staging mystaging.domain.com
Open my search site:
StaleElementReferenceError: {"errorMessage":"Element is no longer attached to the DOM","request":{"headers":{"Accept":"application/json; charset=utf-8","Connection":"close","Content-Length":"2"
Production www.domain.com
Open my search site
․ when i go to my site: 2169ms
․ when i write a text on the search input: 21ms
․ when i click the search button: 130ms
․ then i get the results page.": 46ms
At this time, only the staging tests are failing, but in other situations, when the internet connection is slow, the tests fail in production but pass in staging.
The main problem is that the browser doesn't have the DOM ready for the test, so the test doesn't find the element it requires.
My approach to try to solve this is to wait for the root element of my page to appear, like this:
return driver.wait(() => driver.isElementPresent(By.css(".my__homepage")), 50000);
But this isn't enough, because the test suites still fail randomly. So, my question is:
What would be a better approach for running the test suite in different environments while dealing with elements that aren't ready in the browser yet?
Here is a simple piece of C# code which I use for my work in the IE browser, but it works similarly for other browsers and languages.
private static WebDriverWait wait = new WebDriverWait(driver, TimeSpan.FromSeconds(120)); //max driver wait timeout 120 seconds
try
{
    // Waiting for the document to be ready.
    IJavaScriptExecutor jsExecutor = (IJavaScriptExecutor)driver;
    wait.Until(sc => jsExecutor.ExecuteScript("return document.readyState").Equals("complete"));

    // Waiting for an element.
    // Use 'ExpectedConditions.ElementToBeClickable', as in some cases the element is loaded
    // but might not yet be ready to be clicked.
    wait.Until(ExpectedConditions.ElementToBeClickable(By.Id("elementID")));
}
catch (Exception ex)
{
    // Handle or log the timeout here.
}
Basically, I am putting in a wait for the document to be ready, using the JavaScript executor.
I am also putting an explicit wait on each element I access. I use the ElementToBeClickable expected condition because ElementIsPresent does not always mean the element can be accessed the way you need.
Note: Additionally, you might also have to check other element properties (enabled/disabled, etc.) depending on your needs.
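For completeness, a rough JavaScript equivalent using selenium-webdriver's built-in until conditions (the selector and timeouts are assumptions, matching the question's example):

const { By, until } = require('selenium-webdriver');

async function waitForPage(driver) {
  // Wait for the document to finish loading.
  await driver.wait(async () => {
    const state = await driver.executeScript('return document.readyState');
    return state === 'complete';
  }, 120000);

  // Then wait for the element you actually need, and for it to be interactable.
  const el = await driver.wait(until.elementLocated(By.css('.my__homepage')), 50000);
  await driver.wait(until.elementIsVisible(el), 50000);
  return el;
}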
I have a bunch of functions that need to be called on $(window).on('load' ...). Occasionally, the site hangs indefinitely while loading. There are a bunch of embeds and other pieces of media being pulled in from various APIs.
Is it possible to detect what is still pending without attaching an event listener to every resource?
Edit for clarification:
#PamBlam's comment below was more tuned in to the problem -- I want to be able to do this with javascript, so it could happen client side while my users are browsing.
Specifically, I'd like to be able to identify pending requests and get any relevant details, and send a note to an error logger (such as sentry) to see what specific resources are problems for users on the live site. Perhaps the only solution would be to create a new loadResource function (as suggested in some answers) that compiles these details and, after a long timeout, sends a note to the logger if it still hasn't finished. But, this seems like overkill. Also some of these resources are <iframe>s that are included in the HTML, so more work to add that in.
What I was hoping for - and I'm guessing that this doesn't exist, as I assume javascript doesn't have permission to see what's happening on the browser level - was something that could, after a long time out, essentially look at the Network tab of dev tools and send a report of what is still pending.
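A rough sketch of the wrapper idea mentioned above (loadResource and reportToLogger are hypothetical placeholders; the real report would go to Sentry or whatever logger is in use):

// Start a resource load and, if it has not settled after a long timeout,
// report the URL so it shows up in the error logger.
function loadResource(url, loadFn, timeoutMs) {
  let settled = false;
  const timer = setTimeout(() => {
    if (!settled) reportToLogger('Resource still pending after ' + timeoutMs + 'ms: ' + url);
  }, timeoutMs);
  return Promise.resolve(loadFn(url)).finally(() => {
    settled = true;
    clearTimeout(timer);
  });
}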
One of the best ways to debug JavaScript is Chrome DevTools (while I am a big advocate of Firefox, in this case Chrome is just mind-blowing). Use breakpoints and the Network panel to the best of your abilities.
Appending the link for reference:
https://developers.google.com/web/tools/chrome-devtools/
Count how many resources are loading, and decrement the count when each one finishes. When the count reaches zero, all resources are done.
var resourcesPending = 0;

// Load some resources
resourcesPending++;
loadAResource(function () {
    resourcesPending--;
    if (!resourcesPending) allResourcesLoaded();
});

resourcesPending++;
loadAResource(function () {
    resourcesPending--;
    if (!resourcesPending) allResourcesLoaded();
});

// etc..
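For example, loadAResource above (which the answer leaves undefined) might wrap an image load along these lines (a hypothetical sketch; the URL is a placeholder):

function loadAResource(done) {
  var img = new Image();
  // Invoke the callback whether the resource loads or fails,
  // so the counter always reaches zero.
  img.onload = img.onerror = function () { done(); };
  img.src = 'https://example.com/some-image.png';
}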
How can I report JavaScript errors that occur during test execution using Intern? Basically, if there are any JavaScript errors on the page (even as part of things that aren't explicitly tested) I want to know.
Background
I'm just getting started with Intern and with testing in general, and I'm trying to test all major pages on my site in all browsers because I just changed all our JavaScript to load via require.js. While it looks good in Chrome, I've had issues with require.js and random browsers in the past, so I wanted to automate everything. The most likely issue that will arise is that some random JS will fail to execute due to asynchronous loading and the lack of an expected global. Since there are no tests currently set up, I basically want to start by running a 'test' that goes through all major pages and reports any JavaScript errors.
In order to report uncaught errors, you need to hook the window.onerror method of the page. This is possible, but the page load will need to be finished before you add the hook, which means that any errors that occur before/during the page load (or that occur while the page unloads) simply cannot be caught and reported. It also means if you perform an action that moves to a new page (like a form submission), you will need to make sure you retrieve the list of errors before you perform the action that causes navigation, and reconfigure the window.onerror handler after you get to the new page.
To perform such reporting with a functional test, your test would end up looking something like this:
return this.remote
    .get('http://example.com')
    .execute(function () {
        window.__internErrors__ = [];
        window.onerror = function () {
            __internErrors__.push(Array.prototype.slice.call(arguments, 0));
        };
    })
    // ... interact with the page ...
    .execute(function () {
        return window.__internErrors__;
    })
    .then(function (errors) {
        // read `errors` array to get list of errors
    });
Note that (as of August 2014) errors from window.onerror in all browsers except recent versions of Chrome provide only the message, script source, line number, and (sometimes) column number, so this information would only be useful to say “this action caused an error, go do it manually to get a stack trace”.
During unit tests, Intern already tries to automatically catch any unhandled errors and treats them as fatal errors that halt the system (since you should never have code that generates this kind of unhandled error).
We recently started using SVN Keywords to automatically append the current revision number to all our <script src="..."> includes (so it looks like this: <script language="javascript" src="some/javascript.js?v=$Revision: 1234 $"> </script>). This way each time we push a new copy of the code to production, user caches won't cause users to still be using old script revisions.
It works great, except for IE6. For some reason, IE6 sporadically acts as though some of those files didn't exist. We may get weird error statements like "Unterminated String Literal on line 1234," but if you try to attach a debugger process to it, it won't halt on this line (if you say "Yes" to the debugger prompt, nothing happens, and page execution continues). A log entry for it shows up in IIS logs, indicating the user is definitely receiving the file (status code 200, with the appropriate amount of bytes transferred).
It also only seems to happen when the pages are served over https, not over standard http. To further compound things, it doesn't necessarily happen all the time; you might refresh a page 5 times and everything works, then you might refresh it 20 more times and it fails every time. For most users it seems to always work or else to always fail. It is even unpredictable when you have multiple users in a corporate environment whose security and cache settings are forcibly identical.
Any thoughts or suggestions would be greatly appreciated, this has been driving me crazy for weeks.
Check your log with Fiddler2 to make sure the browser actually requests the page and does not use the cache instead. Also check the URL of the JS script and the headers returned.
Are you using GZip? There have been issues reported with it.
I would suggest testing with the Internet Explorer Application Compatibility VPC image. That way, you can do your tests with a 100% real IE6, and not one of those plugins that claim to simulate IE6 inside another browser.
I think this is a very clever idea. However, I think the issue could be related to the spaces in the URL; technically, the URL should have the spaces encoded.
See if you can customize the keywords in SVN to generate a revision number without special characters.
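For example, a small helper in the build or templating layer could reduce the expanded keyword to the bare number before it lands in the query string (a sketch; where this runs is an assumption):

// Reduce "$Revision: 1234 $" to "1234"; fall back to URL-encoding
// the whole string if no number is found.
function revisionQueryValue(keyword) {
  var match = /(\d+)/.exec(keyword);
  return match ? match[1] : encodeURIComponent(keyword);
}
// revisionQueryValue('$Revision: 1234 $') === '1234'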