Print preview of generated docs - javascript

I am trying to print a PDF created from code in an ASP.NET WebForms app, but before the actual print I want to show the print preview popup that appears, for example, when I use JavaScript's window.print().
Basically, I need the exact same popup to appear without leaving the page I am currently on, but instead of showing the current page in the preview, I want it to show the generated PDF.
The problem is I can't find anything that would get me to this result. Maybe I don't know what to look for, so thanks in advance for any advice.

window.open("path to pdf"); or window.location.href = "path to pdf";
This will open your PDF in a new window (or navigate to it). I don't know about a print preview, but it will at least let the user view the generated PDF.

Short answer: no. Print preview is a common but optional client-side feature and preference, and in an uncontrolled environment (the Internet) you do not have programmatic access to it.
If you do have a controlled environment then you can just install custom software that you can program against. But since you're asking in the first place I'm assuming this won't be an option for you.
As @jaredlee.exe said, your best bet is to pop open a new window, but instead of linking directly to the PDF you could try linking to a simple page that has a full-page iframe or object (or possibly embed) that points to your PDF. Then you could bind an onload (or onreadystatechange or domcontentloaded or whatever else) event that fires the print() method for that specific object.
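A minimal sketch of that approach, assuming a hypothetical printPdf helper and a placeholder handler URL for the generated PDF; as discussed below, whether print() actually reaches the PDF depends on which renderer the browser hands it to:

function printPdf(pdfUrl) {
    // open a blank window (may be blocked by popup blockers) and fill it
    // with a full-page iframe pointing at the generated PDF
    var w = window.open('', '_blank');
    w.document.write(
        '<html><body style="margin:0">' +
        '<iframe id="pdfFrame" src="' + pdfUrl + '" ' +
        'style="position:fixed;top:0;left:0;border:0;width:100%;height:100%">' +
        '</iframe></body></html>'
    );
    w.document.close();

    var frame = w.document.getElementById('pdfFrame');
    frame.onload = function () {
        // may be ignored or blocked if the PDF is handed off to a plugin
        // or an external renderer instead of the built-in viewer
        frame.contentWindow.focus();
        frame.contentWindow.print();
    };
}

// usage -- the handler path is an assumption, e.g. an ASP.NET handler that streams the PDF
printPdf('/Reports/GeneratePdf.ashx?id=42');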
That all said, there's a really big point to understand, and that's that web browsers being able to render PDFs natively is a relatively new thing. Adobe shipped a plugin for IE (and maybe Netscape) in the '90s, and over the years newer browsers like Chrome and Firefox were added. Over time, however, these programs started to add their own PDF renderers, and once they did that they actually disabled Adobe's. On top of that, operating system vendors (who often happened to also be browser vendors) started to add native PDF renderers directly to their operating systems. Some people (myself included) believe that all of these renderers pale in comparison to Adobe's reference renderer, so we disable the built-in ones wherever we find them. So for me (and I know I'm weird) all of these options would still, at best, result in an empty window trying to print a blank page, plus a downloaded PDF.
To restate the above, a web browser is most commonly used to view web pages. The moment you switch to PDFs you are no longer in the "web world" but the "PDF world" and you aren't controlling a "web browser" but instead a "PDF renderer". Unfortunately there's no specification for talking to "PDF renderers" out there because the field is still too new.
To restate my restatement, this all might work some or most of the time, but also don't be surprised if there are edge cases that completely fail.

Related

Protected Content - How to make right-click and F12 not work on your website?

I want to disable right-click on my website, or show an error that says: Protected Content! The reason I want to do this is that I don't want others to see my source code. I know you can make right-click not work, but I'm not sure about F12. If there is no way to disable the F12 key, is there any way to hide the source code from others? I saw a similar website today. If you right-click on that website you get this:
F12 works on that website, but the source code is hidden anyway. How can I achieve similar results? Thanks for your time :)
Answering the question overly honestly:
First, avoid publishing the site on the Internet at all. Make it available only on private machine(s) you have total control of. Make sure there are no USB ports exposed to users, etc. Also, no Internet access of any kind; users might just download some hacker tools that way. If you don't need text input, even better: a keyboard can be used to type in hacker tools as source code and steal your precious sources that way.
Next, make a custom build of a browser. You may want to use tools like Electron instead of a generic browser; that way you end up with an app that runs only your website and has no developer tools, no address bar, nothing else that could be used to gain access to your precious source.
Install Linux, create a new user account with minimal privileges (no write access anywhere) and let it run X without any window manager. Only your Electron app with your precious website, and no menus that could be used to open hacker tools like a text editor that might reveal your precious source code. Also, give the account a complex random password so that users cannot start another session in text mode and see your source code.
Remember that hackers may use timing attacks, side channels or other hacky means of stealing your code. To prevent that, cover the walls of the room your computer is in with a metal grid to make a Faraday cage. Search everyone entering and deny them any electronic devices. Same for analog photo cameras and paper notebooks. Better safe than sorry: they may reconstruct your site's source code based on how it looks.
Or just accept the hard truth: nobody cares about your website's source code. There are plenty of places to copy-paste code from, and your website is not the most interesting one. And if you are doing this to stop hackers, you need to write secure code (and test/audit it), not hide it.
Short answer: Browsers, which render your website, are a client-side technology, and there is no way you can control who is going to see or not see your source code.
Long(er) answer:
Browsers download your website, together with its source code, onto the user's computer, which means users can manipulate it however they see fit. There are scripts that can block right-click or other interactions, but if you try to stop developers from inspecting the code (and if they are inspecting, it's a good bet they are developers) they will find a way, even if you block F12 or right-click. You can always download the website, use a crawler, open it in a text editor, etc.
You may want to look into minifying and/or uglifying the HTML (and any JavaScript) you serve, but it's not cryptography: again, if someone wants, they will find a way to undo it.
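As a rough illustration with a JavaScript function (the general idea, not the exact output of any particular minifier):

// before
function calculateTotal(price, quantity) {
    var total = price * quantity;
    return total;
}

// after minification/uglification: whitespace stripped, locals renamed
function calculateTotal(n,t){return n*t}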
Also, I'm curious, why would you want to do that?
You can do this using window events, but there are still ways to read your code.
For example, fetching the JS without executing it, or disabling JS in the browser for a moment.
window.addEventListener('keydown', e => {
  if (e.key === 'F12') // detect F12
    e.preventDefault()
})
window.addEventListener('contextmenu', e => e.preventDefault()) // block the right-click menu

Chrome script or CLI to open URL and capture screen

I'm trying to write a script (or a CLI command would also work) that does essentially the following:
open a browser window of a specific size (given in pixels)
open a specific URL in that browser window/tab (either is fine)
do a screen capture of that particular URL and save this capture to a file
push this jpg/png/whatever file somewhere
I can pretty easily write my own script to handle at least #1 and #4, but from the limited reading I've done, it looks as if various Chrome extensions and/or scripting capabilities might be able to do all of this in one shot for me (or maybe everything except #4).
I've got a fair bit of generic programming background, but nothing in the chrome extension/script universe, so that part is pretty opaque to me.
It sounds like exactly what http://casperjs.org/ does.
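For reference, a minimal CasperJS sketch covering steps 1-3; the URL, viewport size and output filename are placeholders, and step 4 (pushing the file somewhere) would still be handled by your own script:

// capture.js -- run with: casperjs capture.js
var casper = require('casper').create({
    viewportSize: { width: 1280, height: 800 } // step 1: browser window size in pixels
});

casper.start('http://example.com/'); // step 2: open the URL

casper.then(function () {
    this.capture('screenshot.png');  // step 3: save a screenshot of the rendered page
});

casper.run();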

Layout problems in CRM for Outlook

I just realized that what is a nice, working layout of a form with a web resource in the online version loses some (but not all) of its formatting when accessed via Outlook. It looks ugly, and I also get errors.
It's somehow related to the JavaScript added to the solution. Or, rather, the web resources, I'd say. Any suggestions on how to debug? F12 doesn't show the console when run from Outlook. I haven't done much with that version so any hint might be of help.
Are you able to narrow down your problem to a part of the script? Could you for instance disable and enable parts of the script(s) to see what works and what does not?
Since the layout is also being influenced, I think you are doing some (or a lot of?) DOM manipulation. This page on MSDN states:
HTML DOM manipulation is not supported
But there should not be that much of a problem (heard that one before...) using Outlook: Dynamics CRM 2011 Outlook client and browser rendering
Edit:
Just to prevent people overlooking the link to a related post from the comments: Random JavaScript Errors in CRM 2011 Outlook Client
Although the page you see in the CRM Outlook client is indeed rendered by IE, it is served by a different version of the engine than the one used for browsing. During the rendering process it's "picturized" (lacking a better word for it), so what you see originates in a web page but isn't one.
I don't think there's a way to debug that version. You can only trust that what you've developed and tested will work as it's supposed to. Note that there's no connected IE process running at the same time as the Outlook client.
I'll gladly stand corrected, but as far as I've tried (and I've tried a lot, a lot), there's no way to get there.

Create 'safe' JavaScript for use on Internet Explorer

http://i.imgur.com/s4ZQI.png (Can't post image because I'm a new user)
Age-old question: is there any way to make a piece of JavaScript safe to use on Internet Explorer without the security warning popup box appearing? The JavaScript I'm using is simply a drop-down sub-menu that appears when you hover over a link.
If it's something to do with the way the JavaScript is coded, I can link if needed.
Thanks
Assuming that your problem is caused by testing pages from your local disk (and not through some really esoteric scripting) either:
Run a web server and test your pages on that
Give your pages the mark of the web
The point being to run them in a security context that allows scripts to execute.
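For the second option, the mark of the web is a comment added at the very top of the local HTML file; the number in parentheses is the character count of the zone URL that follows (about:internet is 14 characters), and it tells IE to treat the local file as if it came from the Internet zone, so scripts run without the warning bar:

<!-- saved from url=(0014)about:internet -->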

How to build a web crawler to find a specific advert, which is in an iframe loaded by Javascript

I'm trying to find all instances of an advert on a website. The advert is in an iframe which is loaded by javascript (it doesn't appear at all if javascript is turned off). Detecting the advert itself is extremely simple, both the name of the flash file and the target of the href always contain a certain string.
What would be the best "starting point" for achieving this? At the moment I'm considering an Adobe AIR app, which could crawl the site and examine the DOM to find the ad, and would run javascript and load the content of the iframe. The other option I can think of is using Firefox as the platform (using maybe GreaseMonkey or Selenium? I don't really know how to leverage Firefox like this).
Does anyone know of anything suitable to build this, or have any suggestions on using Firefox to do it?
Extra details:
Being CPU intensive isn't really an issue, nor is anything depending on a browser being open. This doesn't need to run on a headless server, it will be running on a powerful desktop box. OS is also not an issue. It would be advantageous if the crawler loaded each page multiple times, as the advert is in rotation. While the crawler does need to execute the javascript and load the content of the iframe, it does not need to be able to display flash files.
An alternative to using a "browser as a crawler" is HtmlUnit. As its page says:
HtmlUnit is a "GUI-Less browser for Java programs". It models HTML documents and provides an API that allows you to invoke pages, fill out forms, click links, etc... just like you do in your "normal" browser.
It has fairly good JavaScript support (which is constantly improving) and is able to work even with quite complex AJAX libraries, simulating either Firefox or Internet Explorer depending on the configuration you want to use.
I don't think you want a crawler. You are going to run it on a single site and don't want it to wander around the Internet following links, right?
If so, you want to find something on a page with JavaScript enabled, and then you just have to use JavaScript.
You'll need:
the site :)
the correct rights to access its content: use Greasemonkey for Firefox or user scripts in Opera
code similar to this jQuery sample:
finding stuff in iframes:
$('iframe').each(function () {
    // look inside each iframe for embedded <object> elements
    $(this).contents().find('object').each(function () {
        var name = $(this).attr('name') || '';  // guard: attr() may return undefined
        if (name.match(/regex/)) {              // the string that identifies the advert
            $(this).remove();                   // or do whatever you want
        }
    });
});
Caution: accessing iframe contents differs between browsers and depends on when you run the script.
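If you go the Greasemonkey route mentioned above, the snippet would be wrapped in a user script along these lines (the script name and site URL are placeholders):

// ==UserScript==
// @name     Find advert iframes
// @include  http://example.com/*
// @require  https://code.jquery.com/jquery-1.12.4.min.js
// ==/UserScript==

// ...the jQuery snippet above goes here, running on every matching page load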
If the ad is only displayed when JavaScript is enabled, you are going to have a problem, as no ordinary crawler is going to be able to read the web page that way.
Is there something in the JavaScript code itself that could be a tip-off to where the ad is displayed? If so, maybe you can check for that.
I've tried similar stuff before, and I used BeautifulSoup in python, and it worked really well.
