I am writing a small application with Qt 4.6 (64-bit Arch Linux, though that shouldn't matter) which lets the user edit a document using a QWebView with contentEditable turned on. However, for some reason embedding an image does not work. Here is a code snippet:
void LeafEditView::onInsertImage()
{
// bring up a dialog, ask for an image
QString imagePath = QFileDialog::getOpenFileName(this,tr("Open Image File"),"/",tr("Images (*.png *.xpm *.jpg)"));
ui->leafEditor->page()->mainFrame()->documentElement().evaluateJavaScript("document.execCommand('insertImage',null,'"+imagePath+"');");
}
The test image does in fact exist and yet absolutely nothing happens. Bold / italics / underline all work fine via JavaScript, just not images. Thoughts?
Check that QWebSettings::AutoLoadImages is enabled.
You could also try:
document.execCommand('insertImage',false,'"+imagePath+"');
Try using relative vs absolute paths to the image.
Last but not least, poke around this sample application -- they use a similar method of JavaScript execCommand(), but do some things slightly differently, such as using QUrl::fromLocalFile.
Best of luck!
It turns out that WebKit has a policy of not loading resources from the local filesystem without some massaging. In my code, I have a WebKit view which I'm using to edit leaves in a notebook. The following one-liner solved my issue:
ui->leafEditor->page()->mainFrame()->setHtml("<html><head></head><body></body></html>",QUrl("file:///"));
From what I gleaned by lurking around the WebKit mailing list archives, in order to load files from the local filesystem one must set the base URL to a file: URI, and this does the job.
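With that base URL in place, the image path passed to execCommand should itself be a file:// URL (on the Qt side, QUrl::fromLocalFile does this). A hypothetical JavaScript-side helper, shown here only as a sketch:

```javascript
// Hypothetical helper: turn a local filesystem path into a file:// URL
// before handing it to document.execCommand('insertImage', ...).
function toFileUrl(path) {
  var p = path.split("\\").join("/");   // normalize Windows separators
  if (p.charAt(0) !== "/") p = "/" + p; // ensure a leading slash
  return "file://" + p;
}

console.log(toFileUrl("/home/user/test.png")); // file:///home/user/test.png
console.log(toFileUrl("C:\\img\\a.png"));      // file:///C:/img/a.png
```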
This question is about executing JavaScript AND HTML on the command line.
Very important: this is NOT about scraping. It's about "rendering" (executing) the JavaScript contained in a private .html file (not web-accessible) and saving the result, with the JavaScript executed, in a new .html file.
CONTEXT : I have private .html files (with CSS, JavaScript, etc.) but "unrendered"... if you see what I mean.
WHAT I WANT TO ACCOMPLISH : in an automated PHP script => I create a raw (unexecuted) .html => CLI with the .html => execute the JavaScript => save the result in a new .html
I TRIED : google-chrome headless -> it can save as PNG or PDF, but is unable to save as .html :(
OTHER ISSUE with google-chrome headless : the --dump-dom option saves the original .html, not the executed one...
example
google-chrome
--headless --hide-scrollbars --run-all-compositor-stages-before-draw
--virtual-time-budget=10000 --disable-translate --disable-popup-blocking
--disable-infobars --ignore-certificate-errors --autoplay-policy=no-user-gesture-required
--disable-gpu --dump-dom
"someHTML_with_CSS_AND_JAVASCRIPT.html" > final.html
Other command-line solutions like CutyCapt, wkhtmltopdf, etc. have the same problem: they only save to PDF, JPG, or PNG; I want to save to .html.
After rendering, a web page can generally be stored in four formats besides its pre-render HTML/CSS/JS components.
The first two are the more familiar screen-rendered image (PNG is best) and encapsulated vectors (PDF is the best and simplest).
The other two are HTML-based formats, but not always built the same way. Firefox has long-standing problems (a 23-year-old bug) saving the post-rendered web format, whereas it is more native to Edge (preceded by IE) and Chrome/Chromium variants.
One option is to encapsulate the render and its media in a zipped HTML package, as used by ePub3; the other is a single MHTML (.mht) file, and the easiest way to produce one is to right-click and save as a web page. "MHTML was proposed as an open standard, then circulated in a revised edition in 1999."
As described in the bug report, there are extensions for both Firefox and Chrome.
One web extension that already makes this possible is called "Save Page WE". It has been actively maintained. Users can save a complete web page or selected items, and can also save one or more selected pages (i.e. selected browser tabs). An information bar at the top of each saved HTML file is supported as well and can be enabled in the settings. The extension is available for Firefox - link and Chrome - link.
So, saving this page as HTML, the top-left SVG Stack Overflow logo will start off as:
Content-Transfer-Encoding: quoted-printable
Content-Location: https://cdn.sstatic.net/Img/unified/sprites.svg?v=fcc0ea44ba27
<svg width=3D"189" height=3D"530" fill=3D"none" xmlns=3D"http://www.w3.org/=
2000/svg"><path d=3D"M48 280.8v7.6l8.5 7.6L73 281.2V273l-16.5 14.9-8.5-7.1z=
M22 324v3l4 4 7-6v-4l-7 6" fill=3D"#5EBA7D"/><path d=3D"M8 280.8v7.6l8.6 7.=
6L33 281.2V273l-16.4 14.9" fill=3D"#C9CBCF"/><path d=3D"M45 191h29l-14.4-15=
" fill=3D"#F48024"/><path d=3D"M5 191h29l-14.5-15" fill=3D"#C9CBCF"/><path =
d=3D"M59.6 243L74 228H45l14.6 15zM6.5 322.5L0 329h13" fill=3D"#F48024"/><pa=
th d=3D"M7.5 386.5L0 380v13l7.5-6.5zm47.5 87l-8-6.5v13l8-6.5zm-48.5 0L14 48=
0v-13l-7.5 6.5zm20-84L33 383H20M6.5 341.5L0 348h13M19.5 243L34 228H5M19.5 1=
20l2.9 9.2H32l-7.7 5.6 3 9.2-7.8-5.7-7.8 5.7 3-9.2-7.7-5.6h9.6" fill=3D"#C9=
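The `=3D` runs and the trailing `=` at the end of each line above are quoted-printable encoding: `=XX` is a hex escape (`=3D` is `=`), and a bare `=` before a line break is a soft line break. A minimal decoder sketch, ignoring charset handling:

```javascript
// Minimal quoted-printable decoder: removes soft line breaks ("=" at
// end of line) and expands =XX hex escapes. Charsets are ignored.
function decodeQuotedPrintable(s) {
  return s
    .replace(/=\r?\n/g, "") // soft line break: join the lines
    .replace(/=([0-9A-Fa-f]{2})/g, function (_, hex) {
      return String.fromCharCode(parseInt(hex, 16));
    });
}

console.log(decodeQuotedPrintable('<svg width=3D"189" height=3D"530"'));
// <svg width="189" height="530"
```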
The old switch for MHTML saving is --save-page-as-mhtml, and it has been proposed several times for removal from --headless!
However, I could NOT get it to work in Edge. There are open issues where it does not work as expected (https://bugs.chromium.org/p/chromium/issues/detail?id=624045&q=save-page-as-mhtml&can=2), so I have to suggest a manual emulation instead, e.g. using Puppeteer.
Personally, for small one-offs I use SendKeys from Windows.
In Edge (Chrome?) keyboard scripting can save the current file.
TRY it with this page:
Ctrl+S, Alt+T, Down, Down, Up, Enter, Enter
@CherryDT pointed me to chromium:
chromium --headless --dump-dom "raw.html" > "rendered.html"
It does what I needed ;)
Alright, so this is a weird one. I'm not entirely sure if this is the right SE site, but I think it is because it regards web code/browser compatibility. If not, someone tell me in the comments and I'll move it.
So basically, I have my game's source code on github. I also am hosting the game itself on github pages. This game should (I believe) function on Firefox and Chrome browsers. The source code has nothing unique to either browser.
The game runs fine on chrome. However, on Firefox this is not the case. None of the assets (images, sounds) are showing up/working on the github pages link. The weird thing is this though: on my local file system, when I open the html file with FF it runs/renders the assets just fine. Also, when I download the zip of my project and try it w/ FF, it also works fine. Why is this the case?
(Note: if you want to see the problem, click on the github pages link, then click on "Start Game"; this will open the game where the problem is occurring.)
Edit:
Forgot to mention, the error I get in the FF console is NS_ERROR_NOT_AVAILABLE. It leads to line 421, which is this: g2d.drawImage(playerSprite, spriteLoc[0], spriteLoc[1]); where I draw the image onto the canvas. (g2d is supposed to be ctx, btw; that's a bad Java habit.)
Try changing the paths of the resources.
You call the sound files and image files this way:
laserSound = new Audio("resources\\Sounds\\laserblast.wav");
playerSprite.src = "resources\\Sprites\\Sprite.png";
You need to change the paths to this:
laserSound = new Audio("resources/Sounds/laserblast.wav");
playerSprite.src = "resources/Sprites/Sprite.png";
That is, change \ to /.
With the paths as they are, Firefox cannot find where your files are.
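If there are many such paths, the substitution can be done programmatically; a small sketch using the resource names from the question:

```javascript
// Convert Windows-style backslash paths to URL-style forward slashes.
function toUrlPath(p) {
  return p.replace(/\\/g, "/");
}

console.log(toUrlPath("resources\\Sounds\\laserblast.wav")); // resources/Sounds/laserblast.wav
console.log(toUrlPath("resources\\Sprites\\Sprite.png"));    // resources/Sprites/Sprite.png
```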
Also, why don't you put init(); at the bottom of the JS file? It's just to make sure the JS parser already knows that the functions you will be calling, like update() and initBackground(), are defined. (This does not seem to be the problem, but just to be on the safe side.)
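As the answer itself hints, this usually isn't necessary: function declarations are hoisted, so a call may precede the declaration in the same file. A quick demonstration (the names here are illustrative, not from the game's code):

```javascript
// Function declarations are hoisted: update() is callable here even
// though its declaration appears later in the file.
var result = update();

function update() {
  return "updated";
}

console.log(result); // updated
```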
I have written a small utility in Excel-VBA that also interacts with Acrobat Javascript in a handful of separate .pdf files.
The code has been tested extensively and runs exactly as intended on my desktop PC. However, I ultimately need to implement this code on a Microsoft Surface platform. When I try to run the same code from an Excel file on a Microsoft Surface, the code balks at any lines utilizing "GetJSObject."
E.g., the following works fine on my PC, but causes an "object or method not supported" error on my Surface.
Set gAPP = CreateObject("AcroExch.App")
Set gPDDoc = CreateObject("AcroExch.PDDoc")
If gPDDoc.Open(pdfFileName) Then Set jso = gPDDoc.GetJSObject
So far, I've found some hints online that GetJSObject doesn't work well in a 64-bit environment, and my Surface runs 64-bit Windows 10 with 32-bit Excel.
However, I don't think that this alone can account for the difference in behavior across both machines; my desktop is running 64-bit Windows 7 with 32-bit Excel, and everything works as intended.
Where should I be looking to help discover the source (and solution) of the problem?
EDIT/UPDATE: The GetJSObject statement actually works as intended, IF I take the additional step of manually opening a copy of one of the relevant .pdf files in Acrobat prior to running my VBA code. I assume this means it is somehow the object definitions (e.g. Set gAPP = CreateObject("AcroExch.App")) that behave differently on the Surface relative to my PC, and not the GetJSObject command specifically, as originally thought?
So far, it hasn't made much sense to me how or why this could be true (let alone how I could go about resolving the issue).
Not sure if this has been answered yet; however, there are two courses of action I'd take for research:
1.
See if you can launch it without the constructor by using:
Set AcroApp = New AcroApp
Rather than
Set AcroApp = CreateObject("AcroExch.App")
2.
Ensure you are using the same version of Acrobat. From my research, this error matches the very first Google result for the query:
createobject acroexch.app error 429
You cannot do this with Adobe Reader, you need Adobe Acrobat.
This OLE interface is available with Adobe Acrobat, not Adobe Reader.
I'm doing a couple of things with jQuery in an MTurk HIT, and I'm guessing one of these is the culprit. I have no need to access the surrounding document from the iframe, so if I am, I'd like to know where that's happening and how to stop it!
Otherwise, MTurk may be doing something incorrect (they use the 5-character token &amp;amp; to separate URL arguments in the iframe URL, for example, so they DEFINITELY do incorrect things).
Here are the snippets that might be causing the problem. All of this is from within an iframe that's embedded in the MTurk HIT** (and related) page(s):
I'm embedding my JS in a $(window).load(). As I understand it, I need to use this instead of $(document).ready() because the latter won't wait for my iframe to load. Please correct me if I'm wrong.
I'm also running a RegExp.exec on window.location.href to extract the workerId.
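That extraction might look something like the following sketch (the workerId parameter name is taken from the question; the URL is made up, and the pattern tolerates MTurk's literal "&amp;amp;" separator as well as a plain "&"):

```javascript
// Sketch: pull the workerId out of the HIT URL's query string,
// accepting either "&" or the literal 5-character token "&amp;"
// as the argument separator.
function getWorkerId(href) {
  var m = /[?&](?:amp;)?workerId=([^&#]*)/.exec(href);
  return m ? decodeURIComponent(m[1]) : null;
}

console.log(getWorkerId("https://example.com/hit?assignmentId=A1&amp;workerId=AXXXX"));
// AXXXX
```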
I apologize in advance if this is a duplicate. Indeed - after writing this, SO seems to have made a good guess at this: Debugging "unsafe javascript attempt to access frame with URL ... ". I'll answer this question if I figure it out before you do.
It'd be great to get a good high-level reference on where to learn about this kind of thing. It doesn't fit naturally into any topic that I know - maybe I should learn about cross-site scripting so I can avoid it?
** If you don't know, an MTurk HIT is the unit of work for folks doing tasks on MTurk. You can see what they look like pretty quickly if you navigate to http://mturk.com and view a HIT.
I've traced the code to the following chunk run within jquery from the inject.js file:
try {
isHiddenIFrame = !isTopWindow && window.frameElement && window.frameElement.style.display === "none";
} catch(e) {}
I had a similar issue running jQuery in MechanicalTurk through Chrome.
The solution for me was to download the jQuery JS files I wanted, then upload them to the secure amazon S3 service.
Then, in my HIT, I called the .js files at their new home at https://s3.amazonaws.com.
Tips on how to make code 'secure' by chrome's standards are here:
http://developer.chrome.com/extensions/contentSecurityPolicy.html
This isn't a direct answer to your question, but our lab has been successful at circumventing (read: hacking around) this problem by asking workers to click a button inside the iframe that opens a separate pop-up window. Within the pop-up window, you're free to use jQuery and any other standard JS resources you want without triggering any of AMT's security alarms. This method has the added benefit of allowing workers to view your task in a full-sized browser window instead of AMT's tiny embedded iframes.
I have been developing a silverlight page using just xaml, javascript and html (I literally only have a .html, .js and .xaml file). The problem is, I just realized that it isn't working in any browser EXCEPT Internet Explorer (7 for sure).
I have too many lines of code to want to add VB.NET or Visual C# code-behind and use the HTML bridge. I just want the XAML mouse events to work directly as before. In other words, when the XAML's MouseLeftButtonDown says "highlightMe", I want that highlightMe function to be a JavaScript function. But I also want my page to work in any browser.
Now, I've played around with creating a brand-new Visual Studio project with VB.NET or Visual C#, but the XAML file events seem to point to code-behind events. Also, it compiles the Silverlight into a .XAP file. The XAP file is actually a .ZIP file containing a compiled DLL and an AppManifest.xaml.
So, how do I configure my appManifest.xaml to handle a silverlight page that has only javascript and xaml (and an html file pointing to the .XAP as the source). The html part, I THINK I understand. AppManifest is a different story and I definitely need help with that one.
I think it has something to do with creating an app.xaml and page.xaml and using the x:Class value of the main tag.
Since I asked this question I found a page...
http://pagebrooks.com/archive/2009/02/19/custom-loading-screens-in-silverlight.aspx
...that 1) showed people recently using a similar model of .js, .xaml and .html for their Silverlight page, and 2) someone in the comments recommended using Firebug to track down Silverlight JavaScript errors.
This proved to me it's ok to use this model of silverlight and that it should work in other browsers. This also made me go try firebug. Firebug is AWESOME. If you enable the console tab, you can see exactly where the javascript was hanging up. And now that it's working, I can see the result of my gets/posts to google app engine.
Firebug showed that I was using if/else statements in a way that only Internet Explorer allows. For example,
if (blah == 1) { blah2 = 3}
else { blah2 = 5};
works in every browser, but this doesn't:
if (blah == 1) { blah2 = 3} ;
else { blah2 = 5};
Firefox, Chrome, and Safari all apparently require that there NOT be a ; statement terminator between the if block and the else.
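The reason: the stray ; after the if block terminates the if statement, leaving the else with nothing to attach to, which is a SyntaxError in spec-conforming engines (IE was simply lenient). The corrected form from above:

```javascript
// No semicolon between the if block and the else.
var blah = 1, blah2;
if (blah == 1) { blah2 = 3; }
else { blah2 = 5; }

console.log(blah2); // 3
```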
So, for the moment, I appear to have fixed my problem with cross-browser compatibility, but I'd still like to know more about appmanifest.xaml and how to make a .xap file with only javascript. I might need it later.