I created a command-line tool to help expedite HTML form filling. It uses a brute-force approach in that it sends TAB keys to a window and writes info from a config file. This is unstable, so I want to refactor it to set form fields using JavaScript.
I've looked into writing a Firefox addon to do this. I was able to hard-code each field id and write to it from a config file. My issue is I need this functionality in IE.
Is there a way an external application (i.e. a cmd line tool) can write to HTML fields using JavaScript? I've tried recreating the entire HTML page, with the form fields filled in, in Java. I then tried to send this to the normal destination using an HTTP POST. I ran into authentication issues because the forms require a login.
My other idea is to look into web service tricks. It may be unrelated, I have no idea.
Why not try something like Selenium?
It will stop your reliance on hard-coding everything, as you have pretty much free rein over the DOM.
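For example, here is a minimal sketch using the selenium-webdriver package for Node; the URL, field IDs, and values are placeholders, and driving IE also requires IEDriverServer to be installed:

const { Builder, By } = require('selenium-webdriver'); // npm install selenium-webdriver

(async () => {
  // 'internet explorer' needs IEDriverServer on the PATH; 'firefox' or 'chrome' work too
  const driver = await new Builder().forBrowser('internet explorer').build();
  try {
    await driver.get('https://example.com/form');                  // placeholder URL
    await driver.findElement(By.id('firstName')).sendKeys('John'); // set a field by id
    await driver.findElement(By.id('submit')).click();             // submit the form
  } finally {
    await driver.quit();
  }
})();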
Correct me if I'm wrong, though.
You can open a CWebBrowser2 control in your C++/C# application, use it as an HTML browser, and get all the HTML programmatically. You can then parse the HTML with an XML parser and call certain JavaScript hooks.
The HTTP POST idea still seems best; if you have trouble with authentication, you just need to mimic that part as well or reuse the session ID (if a given session is enough for you).
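A rough sketch of that flow with Node 18's built-in fetch; the URLs, field names, and credentials here are all placeholders, and the real site may use a different login mechanism:

(async () => {
  // log in first and capture the session cookie the server hands back
  const login = await fetch('https://example.com/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({ username: 'user', password: 'pass' }),
    redirect: 'manual' // keep the Set-Cookie from the login response itself
  });
  const cookie = login.headers.get('set-cookie');

  // replay that cookie when posting the form data from your config file
  await fetch('https://example.com/form-handler', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded', 'Cookie': cookie },
    body: new URLSearchParams({ firstName: 'John', lastName: 'Doe' })
  });
})();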
I'm creating an application whose purpose is to fill out a long form, the details of which are saved to a remote database so they can be recalled/edited later. When the form is ready to send out, I need to convert it into a PDF and email it out. Preferably, I would like to do this without using the user's filesystem, but if that's not doable I can work with it.
I've been looking into solutions for converting to PDF such as jsPDF, and for the email functionality it looks like the standard is to use nodemailer. However, I'm not sure how to hook one into the other, especially if I'm going to be avoiding the filesystem.
This is a web app that will be primarily accessed with iPads and phones. My app is built in React, using Apollo/GraphQL for queries and an Express server, obviously all sitting on Node.
Are there any good solutions to this problem? This is a bit of a crunch time problem at this point, and any help would be greatly appreciated. I've been tearing my hair out on this.
I do all of the stuff you need; it's not the exact same process, but it's essentially the same.
First of all, you will be doing all of this in the backend: the user submits the form, you get the data there, and you work from that. Once you have the data, you will want to create a PDF file. To do so, I use this: https://www.npmjs.com/package/html-pdf. It does what it says and works like a charm. In order to use it, you need to have some HTML. I get the HTML using ejs, more specifically the render function (you pass your data to the ejs template you want to render and get the HTML back).
Once you have the HTML, convert it to a PDF with that module (save it to some tmp folder, overwriting whatever was there, or whatever you want to do). You can then use nodemailer to send the file (check the docs; sending attachments is just a matter of adding the data).
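A rough sketch of that pipeline; the template path, data, SMTP settings, and addresses are placeholders. Note that html-pdf can also hand you a Buffer via toBuffer, which avoids even the tmp folder:

const fs = require('fs');
const ejs = require('ejs');               // npm install ejs
const pdf = require('html-pdf');          // npm install html-pdf
const nodemailer = require('nodemailer'); // npm install nodemailer

const formData = { name: 'Jane Doe' };    // whatever the client submitted

// render the data into HTML with ejs
const template = fs.readFileSync('templates/form.ejs', 'utf8');
const html = ejs.render(template, { form: formData });

// convert the HTML to a PDF buffer, then mail it as an attachment
pdf.create(html).toBuffer((err, buffer) => {
  if (err) throw err;

  const transporter = nodemailer.createTransport({ /* your SMTP settings here */ });
  transporter.sendMail({
    from: 'app@example.com',
    to: 'recipient@example.com',
    subject: 'Completed form',
    attachments: [{ filename: 'form.pdf', content: buffer }]
  });
});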
This is what I do. Surely there must be other ways to do the same.
I saw this answer, Check if string exists in a web page, and it works,
but what about checking for a string on an external web page that uses AngularJS? Is it still possible to search with PHP cURL, or should another language be used?
The simple answer is no, because AngularJS is a client-side SPA framework. You would need to parse the JS and run it like a browser would in order to determine the content of the page. I don't know of any PHP libs that do this.
The alternative to "just PHP" would be to use a web crawler. There are a couple out there that solve this exact problem. You could then technically use PHP to read the output of the web crawler program. But then you might not need PHP at all...
If you're going to do any sort of serious page reading, I would just use a web crawler/browser to do this. Why reinvent the wheel (the browser) when you can just use it?
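If you go the browser route, a headless browser such as Puppeteer (Node) is one option; a minimal sketch, with the URL and search string as placeholders:

const puppeteer = require('puppeteer'); // npm install puppeteer

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // wait for the Angular app to finish rendering before reading the page
  await page.goto('https://example.com/angular-page', { waitUntil: 'networkidle0' });
  const html = await page.content();

  console.log(html.includes('string to find') ? 'found' : 'not found');
  await browser.close();
})();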
I wonder if I can use a form from an external webpage on my own website. I am creating an application using the WAMP stack and I need some PHP/JavaScript/whatever script to do it.
Basically I'd have exactly the same form on my website (or at least a similar one) as the one on the external webpage; the goal of this form is only to perform a search. The user would be able to do the search and see the results posted on my website as well, with all of this happening in a hidden way.
I really have no idea how to do this; I searched Stack Overflow and the web looking for a solution, but it really seems a bit complicated.
Here's an image to illustrate more or less what I want:
edit: I don't want any script or code! I just want to know what the best way to do this is. I will eventually come to a solution (I hope!). Thanks!
The concept is called web scraping; you can find more about it at https://en.wikipedia.org/wiki/Web_scraping
As an answer: you can write a simple web scraper using PHP cURL.
cURL can submit a remote form for you and get the results of the form submit; those results may need to be processed by your script to be displayed in the form you need.
This Stack Overflow question and answer may clear it up more for you:
How to submit form on an external website and get generated HTML?
I need to create a single HTML file where the person can input text in text fields, then click a button and save the file itself, so they won't lose changes. The idea is similar to what a WYSIWYG editor does to HTML documents, but I need it implemented in the document itself.
Where do I start? I can't find anything like that on Google; perhaps I'm searching the wrong terms.
I need something that uses HTML + JavaScript, no server-side scripting.
JavaScript alone does not have the ability to modify files on your file system. All browsers enforce this for (good) security reasons. You will not be able to make changes to the HTML document itself (but according to the comment by Sean below, you might be able to produce a new copy of the document).
You might try using cookies to store the input values (automatically write them and load them when the document opens). There are various jQuery plugins available to aid in reading and writing cookies.
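Without a plugin, a bare-bones sketch of the cookie idea might look like this (it assumes each field has an id, and cookie values are limited to a few KB):

// save each field's value whenever it changes
document.querySelectorAll('input, textarea').forEach(function (field) {
  field.addEventListener('change', function () {
    document.cookie = encodeURIComponent(field.id) + '=' +
      encodeURIComponent(field.value) + ';max-age=31536000';
  });
});

// restore any saved values when the document opens
document.cookie.split('; ').forEach(function (pair) {
  const [id, value] = pair.split('=');
  const field = document.getElementById(decodeURIComponent(id));
  if (field) field.value = decodeURIComponent(value || '');
});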
In business or enterprise systems this is usually done with a database, which would require server-side scripting.
I think most of these answers are incorrect. Using the FileSystem API, content is only saved to a sandboxed, hidden folder; the user has no control over where it is saved.
As suggested by Sean Vieira, using TiddlyWiki is a good solution.
However, if you want to customise it, you can make a Flash/JS bridge in which the Flash SWF saves the actual content.
As part of a job I'm doing on a web site I have to copy a few thousand lines of text from several pages of the old site and paste them into the HTML for the new site. The long and painstaking way of going to the old page and copying the many lines of text and then going to my editor and pasting it there line by line is getting really old. I thought of using injected JavaScript to do this but I'm not quite sure where to start. Thanks in advance for any help.
Here are links to a page of the old site and a page of the new site. As you can see in the tables on each page it would take a ton of time to copy it all manually.
Old site: http://temp.delridgelegalformscom.officelive.com/macorporation1.aspx
New Site: http://ezwebsites.us/delridge/macorporation1.html
In order to do this type of work, you need two things: a way of injecting or executing your script on that page, and a good working knowledge of the Document Object Model for the target site.
I highly recommend using the Firefox plugin Firebug, or some equivalent tool in your browser of choice. Firebug lets you execute commands from a JavaScript console, which will help. Hopefully the old site does not have a bunch of <FONT>, <OBJECT> or <IFRAME> tags, which would make this even more tedious.
Using a library like Prototype or jQuery will also help with selecting the parts of the website you need. You can submit results using jQuery like this:
$(function() {
  // grab the HTML of the element holding the text you want to copy
  var snippet = $('#content-id').html();
  // post it to your own server so you can save it
  $.post('http://myserver/page', { content: snippet });
});
A problem you will very likely run into is the "same-origin policy" many browsers enforce for JavaScript. So if your JavaScript was loaded from http://myserver as in this example, you would be OK.
Perhaps another route you can take is to use a scripting language like Ruby, Python, or (if you really have patience) VBA. The script can automate the list of pages to scrape and a target location for the information. It can just as easily package it up as a request to the new server if that's how pages get updated. This way you don't have to worry about injecting the JavaScript and hoping all works without problems.
I think you need Greasemonkey: http://www.greasespot.net/
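A Greasemonkey userscript runs your JavaScript on the old page automatically, so something along these lines could dump the table text for you (the selector is a guess; adjust it to the old site's markup):

// ==UserScript==
// @name     Dump old-site table text
// @include  http://temp.delridgelegalformscom.officelive.com/*
// @grant    none
// ==/UserScript==

// collect the text of every table cell on the page
var lines = Array.prototype.slice.call(document.querySelectorAll('table td'))
  .map(function (cell) { return cell.textContent.trim(); })
  .filter(Boolean);

// print it to the console so it can be pasted into the new site's HTML
console.log(lines.join('\n'));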