I am very new to JavaScript, so bear with me:
I am trying to replace one QuickTime movie with another. So far I have used this code from Apple and it works great.
You can see my efforts here: http://www.centurysunstudios.co.uk/test/
Please look at the source code (I tried to paste the code here but it would not let me for some reason; it said I could only post one URL as a new user).
The problem is that the replace method Apple uses works in every browser (on OS X and Windows) apart from IE. In IE the movies do not get replaced and I get this message:
Error: document.movie is null or not an object
Apple does not seem to have a solution and my JavaScript is limited.
Any help would be greatly appreciated.
Thanks
Try this:
<script>
// Find every QuickTime <embed> named "movie" and point it at the new file.
function changeMovie(movieURL) {
    var embeds = document.getElementsByTagName("embed");
    for (var i = 0; i < embeds.length; i++) {
        if (embeds[i].getAttribute("name") == "movie") {
            embeds[i].SetURL(movieURL); // QuickTime plugin scripting method
        }
    }
}
</script>
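If IE still can't find the movie, it may be because IE embeds QuickTime through an ActiveX <object> element rather than an <embed>, which is also why document.movie comes back null there. As a sketch under that assumption (the id "movie" and the availability of SetURL() on the ActiveX control are assumptions, not something from Apple's sample), you could walk the object elements too:
<script>
// Sketch: also update QuickTime <object> elements (used by the ActiveX
// control in IE); assumes the object has id="movie" and exposes SetURL().
function changeMovieInIE(movieURL) {
    var objects = document.getElementsByTagName("object");
    for (var i = 0; i < objects.length; i++) {
        if (objects[i].id == "movie" && typeof objects[i].SetURL != "undefined") {
            objects[i].SetURL(movieURL);
        }
    }
}
</script>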
I want to use HtmlUnit (v2.21) to get some search result pages from Google. This requires me to click on the "people also looked for" link when searching for a person (right side, see the example link), which triggers some JavaScript and changes the content of the current page. But this gives me a JavaScript WrappedException (see below).
Clickable example link: https://www.google.de/search?ie=UTF-8&safe=off&q=nicki+minaj
Simple test case that throws the exception:
String url = "https://www.google.de/search?ie=UTF-8&safe=off&q=nicki+minaj";
WebClient client = new WebClient(BrowserVersion.BEST_SUPPORTED);
HtmlPage page = client.getPage(url);
HtmlElement link = page.getFirstByXPath("//a[@class='_Zjg']");
HtmlPage newPage = link.click(); //throws exception
this.storeResultFile(newPage.asXml(), "test");
client.close();
Result:
net.sourceforge.htmlunit.corejs.javascript.WrappedException: Wrapped java.lang.NullPointerException
at net.sourceforge.htmlunit.corejs.javascript.Context.throwAsScriptRuntimeEx(Context.java:2053)
at com.gargoylesoftware.htmlunit.javascript.JavaScriptEngine.doProcessPostponedActions(JavaScriptEngine.java:947)
at com.gargoylesoftware.htmlunit.javascript.JavaScriptEngine.processPostponedActions(JavaScriptEngine.java:1012)
at com.gargoylesoftware.htmlunit.html.DomElement.click(DomElement.java:799)
at com.gargoylesoftware.htmlunit.html.DomElement.click(DomElement.java:742)
at com.gargoylesoftware.htmlunit.html.DomElement.click(DomElement.java:689)
I stored the XML of the "page" object and made sure that the XPath expression is valid and returns results.
Anybody got any ideas?
It looks like the JavaScript engine (based on Rhino) is very easily upset and quits on some script issues where other browsers are still able to run the script.
I don't know if there is a mistake in the scripts from Google, but these two lines solved it for me:
JavaScriptEngine engine = client.getJavaScriptEngine();
engine.holdPosponedActions();
Nevertheless, when running multiple HtmlUnit objects in multiple threads it is still possible to run across this error. This is more a workaround than a solution.
I'm trying to migrate from Feedly, as it is unacceptable (at least to me) that search is (fully) enabled only in the pro version.
Anyhow, to export my lengthy list of "saved for later" articles I found some lovely scripts:
Simple script that exports a user's "Saved For Later" list out of Feedly as a JSON string, and feedly-to-pocket, where I am instructed to:
You must switch off SSL (http rather than https) or jQuery won't load!
So I thought I did, by adding (Ubuntu 14.04 / Chrome 40 x64)
--ssl-version-min=tls1
to my /usr/share/applications/google-chrome.desktop file (all lines starting with Exec=). However, when I try to run it in the browser console I get
This request has been blocked; the content must be served over HTTPS.
So, any suggestions? (Also, excuse my noobness.)
Go to your Feedly "saved" list and scroll down until all articles have loaded.
Open the console and paste the following JavaScript into it:
function loadJQuery() {
    // Inject jQuery into the page, then chain to loadSaveAs once it has loaded
    var script = document.createElement('script');
    script.setAttribute('src', '//code.jquery.com/jquery-2.1.3.js');
    script.setAttribute('type', 'text/javascript');
    script.onload = loadSaveAs;
    document.getElementsByTagName('head')[0].appendChild(script);
}

function loadSaveAs() {
    // Inject FileSaver.js, then chain to saveToFile once it has loaded
    var saveAsScript = document.createElement('script');
    saveAsScript.setAttribute('src', 'https://cdn.rawgit.com/eligrey/FileSaver.js/5733e40e5af936eb3f48554cf6a8a7075d71d18a/FileSaver.js');
    saveAsScript.setAttribute('type', 'text/javascript');
    saveAsScript.onload = saveToFile;
    document.getElementsByTagName('head')[0].appendChild(saveAsScript);
}

function saveToFile() {
    // Loop through the DOM, grabbing the information from each bookmark
    var map = jQuery(".entry.quicklisted").map(function(i, el) {
        var $el = jQuery(el);
        var regex = /Published:(.*)(.*)/i;
        return {
            title: $el.attr("data-title"),
            url: $el.attr("data-alternate-link"),
            summary: $el.find(".summary")[0].innerHTML,
            time: regex.exec($el.find("span.ago").attr("title"))[1]
        };
    }).get(); // Convert jQuery object into an array

    // Convert to a nicely indented JSON string and save it as a text file
    var json = JSON.stringify(map, undefined, 2);
    var blob = new Blob([json], {type: "text/plain;charset=utf-8"});
    saveAs(blob, "FeedlySavedForLater" + Date.now().toString() + ".txt");
}

loadJQuery();
Source: Feedly-Export-Save4Later
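For reference, each entry in the saved file is just the object built in the map() call above (title, url, summary, time). A hypothetical entry, with invented values, looks roughly like this:
[
  {
    "title": "Some article title",
    "url": "http://example.com/some-article",
    "summary": "The first lines of the article...",
    "time": " 14 Feb 2015"
  }
]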
Not JavaScript, but here is how I saved an HTML page with all the links and excerpts...
Open the saved pages in Feedly in Chrome
Scroll down so they are all there
Inspect any element (the top article is a good choice) so it opens the generated HTML
Find the div id="section0_column0" node
Right-click & copy it
Paste into Notepad++
This HTML is untidy, so carry on...
Do a regex find & replace:
find: (?s)<div id=.+?_main.+?>.+?(<a href=")(.+?)(").+?sans-serif">(.+?)</span>.+?</div>.+?</div>.+?</div>
replace: <div>$1$2$3>$2</a></div> <div> $4<br /> <br /></div>
Save the HTML page.
Open it in Chrome.
I posted the question in the jQuery forum and the solution was rather simple (remove http: from the attribute string).
Line 34 should be:
script.setAttribute('src', '//code.jquery.com/jquery-latest.min.js');
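The reason this works is that a protocol-relative URL (starting with //) inherits the scheme of the page it is loaded from, so on Feedly's https pages jQuery is fetched over https and the mixed-content blocker stays quiet. An equally valid variant (my own suggestion, not from the forum answer) is to spell the scheme out:
// Hypothetical alternative: request jQuery explicitly over HTTPS so the
// browser never sees a mixed-content (http-on-https) request.
script.setAttribute('src', 'https://code.jquery.com/jquery-latest.min.js');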
So, to close the loop: for a fully searchable/archived list of links, not only by title/URL but by context too(!), you can:
Follow the instructions in https://github.com/ShockwaveNN/feedly-to-pocket (with the correction suggested by kind stranger jakecigar; you also have to register a Pocket app (obtain a consumer key) for the Ruby script to work)
Export the HTML list from your Pocket account
Import the Pocket list into a Kifi library
And at last I am Feedly-free, with my own personal search engine.
I know I'm a bit late to the party, but I've been hunting around for a few days to find a reasonably simple solution, none of which have been listed clearly or concisely on Stack Overflow or elsewhere on the web. I have in fact found a much easier way to do this.
Use the JavaScript from this Gist just as it instructs: https://gist.github.com/ShockwaveNN/a0baf2ca26d1711f10e2 (note this is referenced above and found through the link @gep shared in step one)
Once the JS has completed running it will download a text file. (It does still run successfully, even on large numbers; I just exported almost 2500 articles.)
Create a blank test.json in Sublime Text.
Copy all entries from your exported text file into this JSON file.
Weirdly, it does seem you need to copy and paste; I tried just renaming the text file, and when I did that I received errors on the next step.
Make sure you are signed into Pocket.
Go here: https://getpocket.com/import/springpad
Select your newly created test.json
Upload
Note: on large uploads the import page fails to refresh (this did not seem to be an issue, as all my articles did make it into my account).
This allows you to directly upload JSON into your Pocket account, so there is no more messing around with other supposed fixes. I hope this makes it a lot easier for everyone in the future.
Thanks for looking at my question. I am working on an HTML5 audio related project, and I have run into a problem.
What I want to do is assign one audio.src to another audio.src. It worked well in my early demo, but it does not work in my current project: the original audio cannot be played. I logged its whole loading procedure to the console and figured out that the problem happens at durationchange. But I have no idea what is wrong, since my logic is very similar to my early demo.
Hopefully someone here can help me find out what is wrong with my code. The following is my code:
// the original audio is GLOBAL.audio
var segs = $('.cutter-room .container').find('.seg-container');
var audio_self = "<audio id='player_0'>";
// add one more segment for the new cut part
$('.cutter-room').append(audio_self);
$('.cutter-room .container').append(cut.audio_seg); // audio_seg is the 'clothes' of audio tag, a GUI
// new_seg is the audio tag which I want to assign to
var new_seg = document.getElementById("player_0");
var temp = GLOBAL.audio.src;
new_seg.src = GLOBAL.audio.src; // if I comment this one, the original will be fine
And the following is my testing code. When the new <audio> has been inserted into the DOM successfully and I try to play the original audio, its durationchange is never fired:
/*for testing*/
GLOBAL.audio.addEventListener("loadstart", function(){
console.log('start loading');
});
GLOBAL.audio.addEventListener("durationchange", function(){
console.log("change duration");
});
/*end testing*/
By the way, I am sure that the music file is correct. Thanks again!
Update 2013/11/28
Here is the jsFiddle link. I am sorry, I don't know which music link I should put in the src, so I just put my local path. The problem shown in the jsFiddle is a little different from what I described above: there is nothing wrong with the original audio, but the second one cannot play.
I found that if I just open the .html page directly, with no server, nothing goes wrong. But if I run it on a local server, the durationchange does not fire. So does that mean the problem happens on the server side, not in the JS?
But it seems unreasonable that an audio source cannot be assigned to another audio source when running on a server. They are paths, but essentially they are still strings, aren't they?
The thing is that browsers don't like having several different players pointing to the very same mp3 on the same page.
So the trick can be to alter the URL to prevent caching, for example:
assign.src = original.src+"?foo="+(new Date().getTime());
jsfiddle
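A minimal, self-contained sketch of the same trick (the element IDs here are made up for illustration): copy the source with a cache-busting query string and confirm that durationchange fires on the copy.
// Assumes two elements like <audio id="original" src="song.mp3"></audio>
// and <audio id="copy"></audio>; the IDs are hypothetical.
var original = document.getElementById('original');
var copy = document.getElementById('copy');

// The throwaway query parameter makes the browser treat the copy as a
// different resource, so both players can load the same mp3 independently.
copy.src = original.src + '?foo=' + (new Date().getTime());

copy.addEventListener('durationchange', function () {
    console.log('copy duration: ' + copy.duration);
});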
I'm writing a Greasemonkey script for somebody else. He is a moderator and I am not, and the script will help him do some moderating things.
Now, the script works for me, as far as it can work for me (as I am not a mod),
but even the things that do work for me are not working for him.
I checked his versions of the Greasemonkey plugin and Firefox and he is up to date.
The only thing that's really different is that I'm on a Mac and he is on a PC, but I wouldn't think that would be a problem.
This is one of the functions that is not working for him. He does get the first and third GM_log messages, but not the second one ("got some (1) ...").
kmmh.trackNames = function(){
    GM_log("starting to get names from the first "+kmmh.topAmount+" page(s) from leaderboard.");
    kmmh.leaderboardlist = [];
    for (var p = 1; p <= kmmh.topAmount; p++){
        var page = "http://www.somegamesite.com/leaderboard?page=" + p;
        var boardHTML = "";
        dojo.xhrGet({
            url: page,
            sync: true, // synchronous, so boardHTML should be filled before the next line runs
            load: function(response){
                boardHTML = response;
                GM_log("got some (1) => " + boardHTML.length);
            },
            handleAs: "text"
        });
        GM_log("got some (2) => " + boardHTML.length);
        // create dummy div and place leaderboard html in there
        var dummy = dojo.create('div', { innerHTML: boardHTML });
        // search through it
        var searchN = dojo.query('.notcurrent', dummy).forEach(function(node, index){
            if (index >= 10){
                kmmh.leaderboardlist.push(node.textContent); // add names to array
            }
        });
    }
    GM_log("all names from " + kmmh.topAmount + " page(s) of leaderboard ==> " + kmmh.leaderboardlist);
};
Does anyone have any idea what could be causing this?
EDIT: I know I had to write the script according to what he would see on his mod screen, so I asked him to copy-paste the source of pages and so on. Besides that, this part of the script does not depend on being a mod or not.
I got everything else working for him; just this function still doesn't work on either of his PCs.
EDIT 2 (changed question): OK, so after some more trial and error I got it to work, but it's still weird.
When I removed the www part of the URL being used in the dojo.xhrGet(), I finally got the same error he got. So I had him add www to his and now it works.
The odd thing is he now uses a script with the URL containing "www" and I'm using a script with a URL without "www"...
So for me:
var page = "http://somegamesite.com/leaderboard?page="+ p;
and for him:
var page = "http://www.somegamesite.com/leaderboard?page="+ p;
Why don't you have him try logging into an account that is not a moderator account, so that you eliminate one of the variables from your problem space?
It's possible that the DOM of the page is different for a moderator than for a regular user. If you're making assumptions about the page as a regular user that are not true as a moderator, that could cause problems.
I suspect that to fix it, you may need access to a moderator account so you can more easily replicate the behavior.
Oops. It seems that the URL of this game site is accessible as www.gamesite.com as well as gamesite.com (without the www part). This caused the problem.
Sorry to bother you all.
I'll go hide in shame now...
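A small defensive tweak (my own suggestion, not part of the original script) is to build the leaderboard URL from the host the page was actually loaded from, so the request stays on the same host whichever variant (www or not) the user happens to be browsing; it would replace the var page = ... line in the loop above:
// Use the same scheme and host as the current page, so www.somegamesite.com
// and somegamesite.com both keep the request on the user's own host.
var page = window.location.protocol + "//" + window.location.host +
           "/leaderboard?page=" + p;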
I'm building a web app where I heavily use AJAX requests to an XML service. In fact, my app is a front-end with almost no server-side code whatsoever and uses AJAX to communicate with the back-end.
Everything was going fine (I developed and tested on Ubuntu 9.04 with Firefox 3.0 as the browser). One day I decided to see how my app did in IE8... horror! Nothing worked as it marvelously did in Firefox. To be more specific, the Request.HTML calls were not working, and as I said, my app relies heavily on them, so nothing worked. I spent a day trying to get something running but had no luck.
The only conclusion I arrived at was that the XML was being parsed incorrectly (I hope I'm mistaken). Let's get to the code:
var req = new Request.HTML({
    url: 'service/Catalog.groovy',
    onSuccess: function(responseTree, responseElements) {
        var catz = responseElements.filter('category');
        catz.each(function(cat){
            // cat = $(cat);
            var cat_id = cat.get('id');
            var subcategory = cat.getElement('subcategory');
            alert(cat_id);
            alert(cat.get('html'));
            alert(subcategory.get('html'));
        });
    },
    onFailure: function(){...}
});
Take that piece of code, for example.
In Firefox, it worked perfectly: it alerted an ID (for example, 7),
then it showed the contents of the category element, for example:
<subcategory id='1'>
<category_id>7</category_id>
<code>ACTIO</code>
<name>Action</name>
</subcategory>
and then it showed the contents of some inner element, in this case:
<category_id>7</category_id>
<code>ACTIO</code>
<name>Action</name>
In IE8, the first alert worked OK (it alerted 7), but the next alert (alert(cat.get('html'));) gave an empty string, and the last threw an exception... it said something about subcategory being null.
What I concluded from all this is that the elements were parsed correctly in Firefox, but in IE8 only the tags and the attributes came through OK; everything else was completely wrong (in fact, missing). I mean, the inner content of all the elements of the response was gone!
Another fact you could use: this code:
alert(cat.get('tag')); resulted in
Firefox: category
IE8: /category <-----------(?)
Hmm, what else...
Oh yeah... the line you see commented out above (cat = $(cat);) was something I tried in order to fix this. I read in the MooTools docs that IE needs the $ function to be called explicitly on elements to get all the Element magic... but this didn't fix anything.
I was so desperate that I even fiddled around with the mootools.js code.
OK, so...
What I'd like from you, dear MooTools pros, is help solving this problem, for I REALLY need the site to work in IE8; in fact, I chose MooTools precisely to forget about compatibility problems...
PS: if something is not clear, please ask! I'd appreciate any help :D
I had a similar issue some time ago using jQuery. The problem was that, in IE, the incoming response data needed to be handled by the Microsoft.XMLDOM ActiveX object.
The general steps are to:
Instantiate the ActiveX object.
var oXmlDoc = new ActiveXObject("Microsoft.XMLDOM");
Pass it the incoming response data and load it.
oXmlDoc.loadXML(sXmlResponseData);
Parse it as needed.
You can check out the full resolution here.
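As a rough sketch of that approach (not MooTools-specific, and the helper name is mine): branch on what the browser supports and hand the raw response text to the right parser.
// Parse an XML string into a DOM document, falling back to the
// Microsoft.XMLDOM ActiveX object on older versions of IE.
function parseXml(sXmlResponseData) {
    if (window.DOMParser) {
        return new DOMParser().parseFromString(sXmlResponseData, "text/xml");
    }
    var oXmlDoc = new ActiveXObject("Microsoft.XMLDOM");
    oXmlDoc.async = false;
    oXmlDoc.loadXML(sXmlResponseData);
    return oXmlDoc;
}

// Usage: var xml = parseXml(responseText);
// xml.getElementsByTagName('subcategory') then behaves the same in both cases.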