Hey, so I'm making some software that will have 4 different steps:
load
edit item 1
edit item 2
submit to an online storage
Since items 1 and 2 are large items to edit, I wanted a separate page for each.
I also wanted a separate page for submitting both items, so the user knows the submission is happening and receives further instructions.
I used to just redirect the user to the next page, but I found it took 2-5 seconds to load each page, and I'd rather have it be a fluid process.
Since I'm using Electron.js, I am using require to load the JS part of the webpages.
But I can't find a way to load the HTML and then load the JS that affects it (the JS needs the webpage's DOM).
I've attempted using jQuery, but all I accomplished was getting the HTML as a string.
I've been trying to find a way to use AJAX, but so far everything I've found is about including HTML in a different HTML page.
I am new to both jQuery and AJAX and only looked at them today, so maybe I am missing something about them, but I cannot find any tutorials or documentation for loading another page this way.
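To illustrate, this is roughly the kind of thing I'm trying to do (the file names and the init function are made up, and it assumes jQuery works in Electron's renderer process):

// Load the next "page" into the current window instead of redirecting to it
$('#content').load('edit-item-1.html', function () {
    // Once the fragment is in the DOM, pull in the script that drives it.
    // In Electron's renderer, require() is available, so something like:
    const editItem1 = require('./edit-item-1.js');
    editItem1.init();   // hypothetical function that wires up the new DOM
});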
Hi, I've got a question about pagination. First of all, I'm new to HTML and JavaScript.
OK, here is my question. I'm building a website from scratch; I'm not using any web editor.
I've filled index.html with enough content (based on what I think is enough).
Now I want to add a new page with new content; this page will become my new index.html, and the current index.html will become page-2.html (for example).
I will be adding a new page roughly every 3 days, so what happens when I have more than 30 or 40 pages?
I know how to do pagination, but I want to know if there's a way to paginate without having to rename the pages every time.
Something like dynamic pagination. I don't know how to do it; I have been searching but have found nothing.
- Is there a way to do it with JavaScript?
- I don't have any knowledge of PHP.
- Thanks in advance for your answers.
Consider the datatables.net library. It paginates for you: https://datatables.net/reference/api/
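For example, a minimal setup could look like this (the table id and page length are placeholders; it assumes jQuery and the DataTables script are already included on the page):

// Turn an existing <table id="posts"> into a paginated table
$(document).ready(function () {
    $('#posts').DataTable({
        pageLength: 5   // rows shown per page
    });
});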
JavaScript-only solution:
You don't have to rename pages or have multiple pages at all.
What you need to do is create a single long page with the content separated like this, for example:
<article>
Content 1
</article>
<article>
Content 2
</article>
...
Then simply add the pagination, since you know how to do that.
Show elements accordingly: for the first page of the pagination, show articles 1 to 5,
and set display:none on the others, and so on.
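A rough sketch of that idea in plain JavaScript (the page size of 5 and the use of <article> tags are just what the example above assumes):

// Show the articles belonging to the requested page and hide the rest
var pageSize = 5;
function showPage(page) {
    var articles = document.querySelectorAll('article');
    for (var i = 0; i < articles.length; i++) {
        var onThisPage = i >= (page - 1) * pageSize && i < page * pageSize;
        articles[i].style.display = onThisPage ? '' : 'none';
    }
}
showPage(1);   // show the first "page" when the site loads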
However, this is clearly not a good idea, mostly because of load time. The way you should do it is to use PHP and SQL to store your content in a database and display it by calling the server (with PHP).
I have an HTML site with a page of info for each county in the US. I want to convert this into a new WordPress site. I can do this one by one, but my issue comes when I have mass changes to affiliate code or common text: I would have to go to each page and change it manually, and with over 3000 pages that would be way too time-consuming. I don't want to use iframes, but would like to know if there is a way to call the HTML pages into the WordPress page that makes sense SEO-wise.
I am open to creating a page for each county, or having one page with text or buttons listing each county that, when clicked, inserts the info below. I know a lot about static HTML coding but am new to PHP.
If you don't want iframes, I think only two options remain. I don't know if they will work in WordPress, though.
1. PHP Include
With the very simple PHP include() statement, you can include the old HTML files in your new website. For example, rename your file to yourname.php and add this at the position where you want your old page to appear:
<?php include('path_to_old_page/name.html'); ?>
This will include the full old page, but the file needs to be on the same server.
2. AJAX
With JavaScript you can perform XHR requests to load files from the server. This is easiest with jQuery, where you can use $(selector).load('path_to_old_page/name.html'). This will load the file into the HTML elements that the selector matches.
(The selector works the same as CSS selectors, see the w3schools page for more)
This will also include the full old page, as long as it is on the same server.
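For example, something along these lines (the selector and the file path are placeholders, and it assumes jQuery is loaded on the page):

// Pull an old static page into a container on the new page
$(document).ready(function () {
    $('#old-content').load('old-pages/some-county.html');
});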
You can have your static pages in WordPress as well. For example, if you want to create a new county page named "example", create a new WordPress page titled "example". Now for the content: copy the page content (only the "example" county-related HTML code from your static website) and place that code inside the newly created WordPress "example" page. Make sure you add this HTML content in the 'Text' tab of the editor. Your page will be created with all your existing data; you can then view this page and use its URL wherever you want.
I have a page with a lot of JavaScript. However, once rendered, the page remains static: there are no moving things or special effects. It should be possible to render the same HTML without any JavaScript at all, using only plain HTML and CSS. That is exactly what I want - a no-JavaScript version of this particular page. Naturally, I do not expect any dynamic behavior, so I am OK if buttons are dead, for example. I just want them rendered.
Now, I do not want an image. It needs to be HTML with CSS; the CSS may be embedded in the HTML, which is fine too.
How can I do it?
EDIT
I am sorry, but I must not have been clear. My website works with JavaScript and will not work without it. I do not want to check whether it works without; I know it will not, and I really do not care about that. This is not what I am asking. I am asking about a specific page, which I want to grab as pure HTML + CSS. The fact that its dynamic nature is lost is of no importance.
EDIT2
There is a suggestion to grab the HTML from the DOM inspector. This is the first thing I did - in the Chrome developer tools I copied the root html element as HTML and saved it to a file. Of course, this does not work, because it still references the CSS files on the web. I guess I should have mentioned that I want it to work from the file system.
Next I tried saving the page as a complete page, with all its assets, using the browser's Save menu. That saves the page and all the related files, forming a closure that can be opened from the file system. But the HTML has to be manually cleaned of all the JavaScript - tedious and error-prone.
EDIT3
I seem to keep forgetting things. Images should be preserved, of course.
I have to do a similar task on a semi-regular basis. As yet I haven't found an automated method, but here's my workflow:
Open the page in Google Chrome (I imagine Firefox also has the relevant tools);
"Save Page As" (complete page), rename the html page to something nicer, delete any .js scripts which got downloaded, move everything into a single folder;
On the original page, open the Elements tab (DOM inspector), find and delete any tags I know cause problems (Facebook "like" buttons, for example; I also try to delete script tags at this stage because it's easier) and copy as HTML (right-click the <html> tag). Paste this into (replacing) the downloaded HTML file, remembering to keep the DOCTYPE, which doesn't get copied;
Search all HTML files for any remaining script sections and delete them (also delete any noscript content), and search for " on" (with a leading space) to find and remove handlers (onload, onclick, etc.);
Search for images (src=, url()), find common patterns in the image paths, and use find-and-replace or regular expressions to rewrite them globally - for example, replacing /images/ with nothing so that src="/images/myimage.png" becomes src="myimage.png" and points at the local copy. This needs to be applied to all HTML and CSS files. Also make sure the CSS files are referenced with the correct path (href). While doing this I usually replace all link hrefs with #;
Finally open the converted page in a browser (actually I tend to do this early on so that I can see if any change I make causes it to break), use the Console tab to check for 404 errors (images that didn't get downloaded or had a different name) and the Network tab to check if anything is still being loaded from the online version;
For any files which didn't get downloaded I go back to the original page and use the Resources tab to find them and download manually;
(Optional) Cull any content which isn't needed (tracker images/iframes, unused CSS, etc).
It's a big job. I'd love a tool that automated all of it, but so far I haven't found one; the sketch below only covers part of the script-stripping step. The pages I download are quite badly made (shops) with a lot of unusual code, which is why there are so many steps. You might not need to follow every step.
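If you want to script that part, something like this could be a starting point (a rough Node.js sketch, untested; the filename is a placeholder, and regexes are a blunt instrument for HTML, so check the result by hand):

const fs = require('fs');

let html = fs.readFileSync('downloaded-page.html', 'utf8');

// Drop <script>...</script> blocks and <noscript> content
html = html.replace(/<script[\s\S]*?<\/script>/gi, '');
html = html.replace(/<noscript[\s\S]*?<\/noscript>/gi, '');

// Drop inline handlers such as onload="..." and onclick="..."
html = html.replace(/\son\w+="[^"]*"/gi, '');

fs.writeFileSync('downloaded-page.clean.html', html);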
I have a huge (around 20 MB) HTML page that is nothing but pure text. It is a log file from some code running on a server. Now, I am trying to write a Chrome plugin that automatically parses this page when someone opens it and adds appropriate links at certain places, according to my needs.
The page looks like this:
<html><head></head><body><pre> 20mB of pure text </pre></body></html>
So, two questions, the second depending on the first, that would help me.
(I am using pure JavaScript so far. No libraries yet.)
1) How do I parse the page?
2) There is some information in the first 3-4 lines. How do I easily get those first few lines and extract the data from them (if parsing the whole page is not going to be easy)?
What are you trying to parse the page for? Are you creating a summary?
For starters, you can get the first 4 lines by adding an id to the pre tag and doing this:
var first4Lines = document.getElementById("theIdTagOfThePre").innerHTML.split("\n",4);
If that doesn't split correctly, switch the '\n' to '\r\n'.
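If you'd rather not touch the page's markup at all, a content script can also grab the first <pre> directly - a small sketch, assuming the log is in the first pre element on the page:

// Read just the first few lines of the log without adding an id
var pre = document.querySelector('pre');
if (pre) {
    // textContent gives the raw text; take only the first 4 lines
    var firstLines = pre.textContent.split('\n', 4);
    console.log(firstLines);
}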
I have a classifieds website, and the index.html is just going to be a simple form, which uses JavaScript a lot to populate drop-down lists, etc.
I also have a menu, put into a div container, but is this enough?
I mean, I have (almost) no content in index.html, just a search form that submits to a search results page, where all the content is.
So I am worried Google might not find suitable sitelinks for my site.
Does anybody know if I need to add something to the links in index.html that Google might use for sitelinks? Title tags, etc.?
Thanks
Instead of changing your site around, you can just create a good sitemap.xml file - that is, of course, if you're using GET to transfer data to your processing page. I would create a dynamic sitemap.xml that is based on the form data your processing page can read.
http://sitemaps.org/
http://www.smart-it-consulting.com/article.htm?node=133&page=37
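For reference, a sitemap is just one <url> entry per page you want indexed - a minimal sketch with placeholder URLs:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/search?category=cars</loc>
    <changefreq>daily</changefreq>
  </url>
  <!-- one <url> block per search-results page -->
</urlset>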