I'm trying to make a script that will search through an entire webpage for email addresses that end in @xyz.com. For example:
$(document).ready(function() {
    $("body:contains('@xyz.com')")
        .css("text-decoration", "underline");
    /* $("*:contains('@xyz.com')")
        .css("text-decoration", "underline"); doesn't work either */
});

I figured contents() wouldn't be a better choice over contains() ...
For some reason it can't seem to detect any of the email addresses I've hard-coded into the paragraphs on my page.
I can't tie the selector to a specific div, since this script will be running on different webpages, so I can't control which element the email address appears in. But even if I managed to figure that problem out, I still have another problem to deal with...
The email addresses on the webpages will all be random, but they will all end in @xyz.com. So I'd also have to select everything to the left of the @ symbol, up until a space between characters is detected.
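That "everything left of the @ up to a space" rule maps directly onto a regular expression; a minimal sketch (the function name is an invented placeholder):

```javascript
// Extract every address ending in @xyz.com from a string.
// \S+ means "one or more non-whitespace characters", i.e. everything
// to the left of the @ up until the previous space.
function findXyzEmails(text) {
  return text.match(/\S+@xyz\.com\b/g) || [];
}
```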
I can only find solutions for detecting an email in a list, or for validation on forms, etc.
How can I achieve this?
The problem is that your script is running in the background page, not in the web page itself. The background page can't directly access the DOM of pages loaded by Chrome.
To do what you want, look into Content Scripts. These allow you to inject scripts into a given page (like the page loaded in the active tab of Chrome). Such a script can act on the DOM and communicate with the background page via Chrome's Messaging API.
More information here
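A minimal sketch of how the pieces fit together (the file names and match pattern are placeholders):

```javascript
// manifest.json (fragment) — tells Chrome to inject content.js into pages:
//
//   "content_scripts": [{
//     "matches": ["http://*/*", "https://*/*"],
//     "js": ["content.js"]
//   }]

// content.js — runs inside the page, so it can read the DOM directly
// and report back to the background page through the messaging API:
if (typeof chrome !== "undefined" && chrome.runtime) {
  chrome.runtime.sendMessage({
    found: document.body.innerText.indexOf("@xyz.com") !== -1
  });
}
```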
I am building a Google Chrome extension with the following functionality:
The extension will be used only on one specific domain (like http(s)://www.abc.com/*)
The extension will be running (listening) all the time, on any page within the given domain
Upon highlighting (selecting with the mouse) the ident of a course, its statistics (grade distribution) will be shown on the same page. The graph will be shown on top of the page content; it doesn't matter where exactly, I will adjust that later.
Now, the course statistics (graphs) are stored on the same domain, but on a different page. You need to "go/click through" two other pages to select the semester and the desired course. Then the graph is shown.
Question:
Can I (and how) access these graphs (which are on another page, but on the same domain) and display them on my current page (where I highlighted the text)?
Notes:
I read through some other questions here, like this or this, and came to the conclusion that it should be possible since it is on the same domain; however, I could not find any way to make JS access content on another webpage.
I have coded the part that selects the text, captures it, and stores it in a variable. Now I need a function (or set of functions) to access the other page, find a match for my selected text in its HTML, follow the corresponding link, and, once on the next page, copy the image and finally display it on my page.
Ideally, only one tab (where I select the ident) will be open. But if needed, a new tab with the statistics can be opened too.
To access the domain described above you have to be logged in; could this also be a problem?
Unfortunately I don't program in JS, so all of this is new to me.
Many thanks to everyone in advance!
I'm playing around with Google Chrome Extensions and wanted to make one where you fill out a form beforehand. Then, whenever a certain URL is opened, it fills in the information you provided. I can save the information and track the tab URL with one of Google's packages. However, when the URL is loaded, how can I tell which form to put the saved strings into? I know how to use document.getElementById(""), and I can see the ids when I inspect the element, but since it's not my webpage I can't link it to my JavaScript file, so that doesn't help. I've seen this done before but just can't find the right tools. Any guidance toward an answer would be appreciated.
I want to grab some data from a page opened in one tab and paste it into a textarea of another page opened in another browser tab. How can I do this with JavaScript and Greasemonkey?
Set both domains in the metadata block so the script will be activated on both pages.
Find a unique element on each site from which you can detect which page you are currently on.
If you are on the page with the table, get the data and put it into the store with GM_setValue. If needed, open the next website using GM_openInTab.
If the next website is detected, retrieve the stored value with GM_getValue and paste it into the textarea.
This is not that hard with Greasemonkey, though it is necessary to load the textarea page AFTER the table page.
Example
// ==UserScript==
// @include http://website1.com/*
// @include http://website2.com/*
// @require http://ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js
// ==/UserScript==

$(document).ready(function() {
    if ( $("#divfromsite1").length ) {
        // on website1: store the table's HTML and open website2
        GM_setValue("pastetext", $("#gettable").html());
        GM_openInTab("http://website2.com/");
    } else {
        // on website2: paste the stored HTML into the textarea
        $("#pastetextarea").val(GM_getValue("pastetext", ""));
    }
});
This is difficult to do. It isn't impossible, but it is indeed quite difficult.
Cross-document messaging is a way of passing messages from one page to another using JavaScript. The first prerequisite is that the documents must have the same origin. That means they must come from the same port on the same domain and must share the same protocol. Where it works well is when one page is nested in an iframe inside another. When that is the case, you can do the following:
Get the window object of the nested page and add an event listener for the message event.
Use window[name].postMessage(msg, targetOrigin) to send a message to the other window.
Read the data property of the message event, which will contain the information that was sent.
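A minimal sketch of the iframe case (the child frame's id and the origin are invented for illustration); checking event.origin before trusting a message is the important detail:

```javascript
// Only accept messages from the one origin we expect.
function isTrustedOrigin(origin) {
  return origin === "https://example.com"; // hypothetical trusted origin
}

// Browser-only wiring, guarded so the sketch stays self-contained:
if (typeof window !== "undefined") {
  // Parent page: post a message into the nested iframe.
  var frame = document.getElementById("child"); // hypothetical iframe id
  if (frame) {
    frame.contentWindow.postMessage("hello", "https://example.com");
  }

  // Either side: listen for the message event and verify its origin.
  window.addEventListener("message", function (event) {
    if (!isTrustedOrigin(event.origin)) return; // ignore untrusted senders
    console.log("received:", event.data);
  }, false);
}
```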
What makes your case difficult is that you want to communicate across tabs. I'll say that I have no experience doing XDM across tabs, and I personally believe that if you have an application running across multiple tabs that need to interact with one another, you may need to review your application design; users might not like you changing things in their browser that they probably cannot immediately see and thus understand.
Anyway, if you want to go ahead with this, you need to look at how different browsers give you access to their tabs. For Firefox you might want to start at this post: Get window object from tab; for Chrome you may want to start here: How do I get the window object for a specific tab if I have that tab's tabId?
Given that the question has been edited to add the use of Greasemonkey, this may or may not meet your needs; unfortunately I am not skilled enough in Greasemonkey to give you a GM-based solution. If you need help with a solution using XDM, I'll be happy to assist.
I am currently doing some work on a research database where they have decided that they want to be able to share links to articles from the site on social networks (Facebook, Twitter, LinkedIn and Google+).
Preferably this should be done through the share buttons provided by the respective networks. I quickly got the buttons working and displayed correctly on the site by following the implementation instructions from each network.
My problem is a consequence of the fact that the site offers the possibility to show 1000 (1K) posts on a single search result page. This means that when such a page is created, it needs to create 1000 share buttons for each social network (effectively 4000).
Sadly this seems to overwhelm the browser: it offers to stop the JavaScript provided by the social networks, and whether you choose to stop it or not, the page ends up deadlocked waiting for a response from the social networks and never finishes loading.
I suspect the problem may be that the large number of asynchronous requests means the browser somehow misses some of the responses and thus ends up waiting forever for a response that will never come.
As mentioned, it is only a problem with such a large number of posts; if a page displays, for example, 100 posts (effectively 400 share buttons), it works perfectly.
While it could be argued that 1000 posts on a single page is overkill, limiting the maximum number of displayed posts is sadly not an option.
My question therefore is whether any of you know a way to solve this kind of problem, or if my only real option is to create custom share buttons that don't need to be created through the JavaScript provided by the social networks?
The following references lead to the documentation for each of the share buttons.
Twitter
Facebook
LinkedIn
Google+
For all of these buttons there is a main JS file which does the heavy lifting.
So, for LinkedIn, add the script tag:
<script src="//platform.linkedin.com/in.js" type="text/javascript"></script>
once in the page. Then use the script below as a placeholder for your LinkedIn button wherever you need it (don't forget to replace the data-url attribute):
<script type="IN/Share" data-url="http://developer.linkedin.com/plugins/share-plugin-generator" data-counter="top"></script>
For Twitter, similarly, the script tag below needs to be added once to the page; its job is to fetch the main JS file and add it to the page.
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script>
The markup below can be added as many times as you want, wherever you want. Replace the data-url attribute with the URL that should be tweeted when the button is clicked:
<a href="https://twitter.com/share" class="twitter-share-button" data-url="http://example.com/">Tweet</a>
When you get the code for the FB or Google+ Like buttons, you will similarly get one script which needs to be added once, and then markup to be added wherever you need it.
EDIT:
Based on your comment below: the scripts will surely cause issues because they need to convert each and every placeholder into a good-looking 'Like' button. Below are a few ways to improve performance:
Run these scripts only on page load (i.e., add the main scripts at load time).
Using setTimeout or setInterval, work on 100 placeholders at a time (requires changes to the main scripts).
Lazy-load the initialization of the Like buttons: when the user scrolls and buttons come into view, initialize them then (requires changes to the main scripts).
Recommended approach: keep just one set of Like buttons. When the user hovers over a search result, add this set of buttons to that div and change the URL-related attributes on the buttons. This way only one set of buttons is shown, and initializing it takes no time at all.
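In that spirit, a sketch of the shared-button idea using a plain Twitter Web Intent link instead of the widget script (the .result class and data-url attribute are assumptions about the page's markup):

```javascript
// Build a Twitter Web Intent link for a given page URL, so a plain
// anchor can act as a share button without any widget JS at all.
function tweetIntentUrl(pageUrl) {
  return "https://twitter.com/intent/tweet?url=" + encodeURIComponent(pageUrl);
}

// Browser-only wiring, guarded so the sketch stays self-contained:
// one shared button element, re-targeted whenever a result is hovered.
if (typeof document !== "undefined") {
  var shared = document.createElement("a");
  shared.textContent = "Tweet";
  document.querySelectorAll(".result").forEach(function (result) {
    result.addEventListener("mouseenter", function () {
      shared.href = tweetIntentUrl(result.getAttribute("data-url"));
      result.appendChild(shared); // moves the single button here
    });
  });
}
```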
I am trying to figure out the best way to accomplish "unobtrusive" forms for a user (within a web app).
The purpose: keep the user on the site by not asking them to fill in unnecessary forms. Ask for details only when they are needed.
The requirements are:
The user should provide additional details only when required (an email address to receive notifications, a login for the account page, credit card details when checking out).
The user should not leave the current page while providing the additional details.
The implementation would be fairly easy if all requests were AJAX ones. It would be easy to analyse the response (a 401 or so) and show the appropriate lightbox form.
I do not see how it can be done "the right way" with plain anchors and form submits, as in both cases the user actually leaves the page (by following the link or submitting the form) and there is no way to analyse the response on the client side.
Converting all links and forms to AJAX ones would be just silly.
The closest analogue to what I want to achieve is the default Basic Authentication dialog in most browsers. But obviously that just doesn't fit my requirements.
Any creative suggestions how to do that for non-AJAX requests?
Regards,
Dmytrii.
In a page sense, where "page" refers to what the user sees and not what the URL is, I can only think of the following ways to update independent parts of a page with JavaScript (and thus Ajax) switched off:
Frames
Iframes
Using held-open connections there are two more ways to update a page; however, these do not work reliably in all cases:
Animated GIF
CSS DIV tags with absolute positioning.
Note that this requires your server to keep a connection open for each person looking at the page, which can be thousands. If this does not work, the only possible workaround is FRAMEs with automatic refresh, which is somewhat clumsy.
As I assume that you do not want to use frames and do not want to render animated GIFs, I will explain the CSS DIV way:
When you load the page, you do not finish loading it. Instead, the connection is kept open by the web server, and the script handling the connection waits for additional information to arrive. When additional data arrives, it is sent to the browser encapsulated in additional DIV tags, which can overwrite other parts of the page.
Using a style attribute on the DIV tag and CSS position:absolute, these can overwrite other information on the page like a new layer. However, you either need position:absolute or must append the data to the end of the page.
How does this work with forms?
Forms usually have a known size, so you can put them into IFRAMEs. These IFRAMEs get submitted to the web server. The script there notifies the waiting script that new data must be output, so the waiting script renders the response and displays it in the page, while the script which handled the submit re-displays the form with fresh values only.
How does this work with 404 and anchors?
I don't really know, because this would have to be tested, but here is a hint at how I would try to implement it:
We have 2 issues here.
First, the URL must not point to other pages but back to a server script, so the href is under control. This script then notifies the waiting script to update the page accordingly, for example by retrieving the target page and sending it to your browser. The script can check for 404s as well.
Second, you must prevent the browser from switching pages when the anchor is clicked. This probably involves some clever tricks using CSS, target and server-side status codes (like "Gone", or a redirect to the current page, whatever) to keep the browser from switching the page. I am not completely sure this works, but remember download pages: they show URLs which do not switch the page but still have an effect (downloading the file). That is where I would start when trying to keep the browser on the current page without using JavaScript.
One idea not followed here is keeping the connection to the CSS file open, rather than the page, and sending new CSS to the browser, which then "fills in empty stubs" the CSS way. But I doubt this works very well; most browsers probably parse the CSS only after loading has finished, though perhaps I am wrong.
Also note that keeping a connection open means the page never finishes loading, so you will see the busy indicator spinning the whole time; that is unavoidable with this technique.
Having said all this, I doubt you can get around JavaScript.
What I described here is very difficult to do and therefore usually is not used, because it scales badly. And it is a lot more difficult than using JavaScript alone (that's why I explained it).
With proper AJAX it is much easier to reach your goal. Also note that you do not need to change your page source much; all you need is to add a script which augments the page content so that, for example, forms suddenly use AJAX instead of a direct POST with re-rendering of the page. Things which cannot be detected easily then need some hints in the tags, so that the tag scanner knows how to handle them. The good thing is that with JavaScript switched off your page still works; however, it then "leaves the page".
Plain HTML was just not designed for creating the application-like web pages we want to see today. All of that was added with JavaScript.
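The augmentation described above can be sketched like this (using fetch, which is a modern addition, not something from the original answer; the 401 branch is the lightbox case from the question):

```javascript
// Turn a {name: value} object into a urlencoded query string.
function toQuery(fields) {
  return Object.entries(fields)
    .map(([k, v]) => encodeURIComponent(k) + "=" + encodeURIComponent(v))
    .join("&");
}

// Browser-only wiring, guarded so the sketch stays self-contained:
// intercept every form submit and send it via AJAX instead. With JS
// switched off, the forms still POST normally.
if (typeof document !== "undefined") {
  document.querySelectorAll("form").forEach((form) => {
    form.addEventListener("submit", (event) => {
      event.preventDefault(); // stop the normal page-replacing POST
      const fields = {};
      for (const el of form.elements) {
        if (el.name) fields[el.name] = el.value;
      }
      fetch(form.action, {
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" },
        body: toQuery(fields),
      }).then((res) => {
        if (res.status === 401) {
          // not logged in: show the lightbox login form here,
          // instead of leaving the page
        }
      });
    });
  });
}
```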
About popup forms
The Basic-Auth handler reloads the page after the user enters something into the dialog; only if Cancel is hit does the current page stay displayed.
But there are two ways to present additional query-popups in a page using JavaScript:
The first one is the JavaScript prompt, as in the following example:
http://de.selfhtml.org/javascript/objekte/anzeige/window_prompt_vor.htm
(Click on the "Hier").
The second one is "JavaScript forms", which are like popups within an HTML page.
However I consider popups to be far too intrusive and bad design.
Ajax and JavaScript are the easiest way
Unfortunately using JavaScript is never easy, but if you think JavaScript is improper or too difficult: there is no other technique which is easier. That's why JavaScript is used everywhere.
For example, your page's onload script can cycle through all anchor tags and modify them so that clicking on them invokes a function. This function then must do something clever.
The same is true for forms. Fields which can be modified (like the user's email address) then have two views: one visible, the other hidden. The hidden one is a form. Clicking on the email address switches the view (disables the first div and enables the second), so that instead of the email address there is suddenly a text field containing it. Clicking the "OK" button changes the button into a spinner until the data is submitted, then the view switches back to the normal one.
That's the usual way to do it using JavaScript and Ajax. And it involves a lot of programming until it works well.
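The view switch described above can be sketched as follows (the element ids are invented):

```javascript
// Toggle between the read-only view and the edit form. The two elements
// only need a .style property, so this works on any such objects.
function setEditing(viewEl, editEl, editing) {
  viewEl.style.display = editing ? "none" : "";
  editEl.style.display = editing ? "" : "none";
}

// Browser-only wiring, guarded so the sketch stays self-contained:
if (typeof document !== "undefined") {
  var view = document.getElementById("email-view"); // hypothetical ids
  var edit = document.getElementById("email-edit");
  if (view && edit) {
    view.addEventListener("click", function () {
      setEditing(view, edit, true); // swap the text for the form field
    });
  }
}
```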
Sorry for not shortening this post and for the missing code snippets; I am currently short of time ;)
Hidden iframe.
Set the target attribute of the form to the name of the iframe, and use the iframe's onload event to determine what the response is.
Or, if you really don't like any JavaScript, don't hide the iframe and instead present it in a creative manner.
CSS to hide an element
#myiframe { position:absolute; left: -999em; display: none; visibility: hidden; }
But normally display: none alone is enough; the full rule above is overkill.
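A sketch of the wiring (the element names are invented): the form posts into the iframe, so the main page never navigates, and the iframe's onload tells you the response has arrived.

```javascript
// Point a form at an iframe so submitting it never navigates the page:
// the form's target must equal the iframe's name.
function wireFormToIframe(form, iframe, name) {
  iframe.name = name;
  form.target = name;
}

// Browser-only wiring, guarded so the sketch stays self-contained:
if (typeof document !== "undefined") {
  var iframe = document.createElement("iframe");
  iframe.id = "myiframe"; // matches the CSS rule above
  iframe.onload = function () {
    // The server's response is now inside the iframe; inspect it here.
  };
  document.body.appendChild(iframe);

  var form = document.querySelector("form"); // hypothetical form
  if (form) wireFormToIframe(form, iframe, "myiframe");
}
```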