I recently ran into a problem related to HTML document validation. It seems to me that Chrome is pretty clever and can fix most of the mistakes.
In my case I receive a raw HTML file with mistakes and I need to load it on a Samsung TV powered by Tizen 2.4. Unfortunately it doesn't provide the same features as Chrome, so I need to fix the documents myself.
What do you think about HTML validation with the help of JavaScript? (My app is written in JS.)
My plan is:
1. Download the HTML page and save it.
2. Download all the related files (CSS, JS, images) and fix the links.
3. Fix all the problems (use some library, or maybe there are some good validators, but it is better to do it offline).
4. Open the document.
You can use a linter; a quick search on GitHub turned up some JavaScript-powered HTML linters:
htmllint: for HTML5
Bootlint: for Bootstrap
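For example, a rough sketch of running htmllint over a markup string, assuming the htmllint npm package and its promise-based API (it reports issues rather than fixing them):

    var htmllint = require('htmllint');

    var html = '<html><body><p>unclosed paragraph</body></html>';

    // htmllint(markup, options) resolves with an array of issue objects
    // (line, column, rule code); an empty array means no problems were found.
    htmllint(html, {}).then(function (issues) {
        issues.forEach(function (issue) {
            console.log(issue.line + ':' + issue.column + ' ' + issue.code);
        });
    });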
Is it possible to disable JS validation in Domino Designer 8.5.3?
I'm accessing a database design where some 3rd party JS libraries (for example the Bootstrap min JS lib: bootstrap.min.js) have been installed within Code/Script Libraries rather than in the Resources/Files section.
The problem this creates is that the built-in JS validator now displays lots of JS errors in the Problems window whenever I'm accessing this application. It's not the biggest deal, but it makes the actual errors/warnings a bit more difficult to find.
I've tried enabling project specific settings and disabling the various JS validators in the Validation section but none seem to have any effect.
I've seen people mention that it's possible to disable this validation in standalone Eclipse but I can't seem to get anything to work in Domino Designer.
Any thoughts welcome.
To my knowledge - there's no way to disable the validations except to move the JS libraries to Resources / Files.
For CSS files not even that is enough to escape the editor's crappy "enhancements".
You need to change the extension of the CSS file to something else.
Don't forget to add the proper "Web Properties / Mime Type".
Once you've skipped the "help" from Domino Designer you should see a nice performance improvement...
I'm looking for a PDF viewer that can load a PDF asynchronously. This is a big need on our site since the PDF documents have at least 50 pages.
I've already looked into pdf.js by Mozilla but I can't seem to make it work (I think the examples are broken).
Any help would greatly be appreciated! Thanks!
UPDATED:
Got it working but my solution was messy. I integrated the web/viewer codebase from the pdf.js repo into my site. So whenever I need to view a PDF, I just use an iframe with the source #{pdf_viewer_path}?file=#{file_path}.
Is there a better solution than this?
NOTE: The above method does not work in a production setting. There are some JS errors showing up when displaying the page. The only way I got it to work was to not precompile the JS files of pdf.js.
This is an experimental solution. I just compiled the Mozilla pdf.js library and integrated it with a Rails engine. You can use the gem I made here: https://github.com/normancapule/pdfjs-rails-engine.
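If you would rather skip the bundled viewer, pdf.js can also be called directly so that pages are parsed and rendered on demand. A rough sketch, assuming a prebuilt pdf.js build that exposes the pdfjsLib global; the file paths and the canvas id are placeholders:

    // Tell pdf.js where its worker script lives (path is a placeholder).
    pdfjsLib.GlobalWorkerOptions.workerSrc = '/javascripts/pdf.worker.js';

    // getDocument() starts loading the PDF; individual pages are only parsed
    // when requested, so a 50-page document isn't processed up front.
    pdfjsLib.getDocument('/documents/manual.pdf').promise.then(function (pdf) {
        return pdf.getPage(1); // render just the first page for now
    }).then(function (page) {
        var viewport = page.getViewport({ scale: 1.5 });
        var canvas = document.getElementById('the-canvas');
        canvas.width = viewport.width;
        canvas.height = viewport.height;
        return page.render({ canvasContext: canvas.getContext('2d'), viewport: viewport }).promise;
    });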
So I'm working on a just-for-fun project to get practice using HTML/CSS/JavaScript.
I'm using Aptana to write all my code, and it is currently set up to run in a browser (obviously); it's a text adventure game.
It would be really cool though to be able to compile the code into an executable file that runs in its own window, not in a browser.
Is this something relatively easy to accomplish?
Thanks in advance for any help! :)
FF and Chrome provide a way to run a custom website in an app mode. That means no menu bars, no address bar, and a whole window dedicated to the website. Maybe this is already what you are looking for.
http://www.rarst.net/software/dedicated-web-app-window/
https://superuser.com/questions/33548/starting-google-chrome-in-application-mode
https://superuser.com/questions/171235/does-internet-explorer-have-something-equivalent-to-chromes-app-mode
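For example, Chrome can be started in app mode from a shortcut or the command line with something like this (the path is a placeholder and the exact executable name depends on your platform/install):

    chrome --app="file:///C:/games/thegame.html"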
But if you are interested in compiled code to speed up your game, this is not the way to achieve that.
For Windows as OS
see http://www.autoitscript.com/autoit3/docs/libfunctions/_IECreateEmbedded.htm
AutoIt is a scripting language for basically everything (with automation), and SciTE is the editor to use with it.
In the example of the _IECreateEmbedded function, just change:
_IENavigate($oIE, "http://www.autoitscript.com")
to
_IENavigate($oIE, "file://.../thegame.html")
It's very simple: you just have to copy-paste it and build it - you can even build it online with the AutoIt Online Compiler.
There are many different ways you can achieve this.
If you're only targeting Windows machines, then creating an HTA (HTML Application) would be the simplest approach.
The modification to the structure of your existing code would be minimal; it's essentially changing the file type and adding a couple of extra tags (see the sketch after the link below). If you wanted a single file instead of an exe plus any resources (images etc.) that you use, you would have to base64-encode your images and inline external scripts into the main page.
For information about embedding images and icons into an HTA: http://www.john-am.com/2010/07/building-a-self-contained-hta-with-embedded-images-and-icons/
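A rough sketch of what the .hta version of your main page might look like (the file names, ids, and hta:application attributes are just examples; Windows opens .hta files with mshta.exe, which uses the IE engine, so very new JavaScript features may not be available):

    <html>
    <head>
        <title>Text Adventure</title>
        <hta:application id="TextAdventure"
                         applicationname="TextAdventure"
                         border="thick"
                         singleinstance="yes" />
        <script type="text/javascript" src="game.js"></script>
    </head>
    <body>
        <div id="output"></div>
    </body>
    </html>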
You could also use AppJS, node-webkit, or similar projects, but they would add around 30 MB of stuff that's not being used.
I am trying to generate PDF files in a CakePHP app, but so far only the HTML gets included in the file. The problem is that the main content of the page (a calendar) is produced by JavaScript, which is completely ignored when generating the PDF. What is the best solution in this case?
I really appreciate your help.
If you use something like wkhtmltopdf it should work, as it renders the page with a real browser engine (WebKit), so the JavaScript is executed before the PDF is generated.
There is a ready-made plugin that works out of the box (after installing wkhtmltopdf).
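For example, something along these lines (the URL and output path are placeholders); the --javascript-delay option gives scripts like the calendar time to run before the page is captured:

    wkhtmltopdf --javascript-delay 2000 http://yourapp.example/calendar/view calendar.pdf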
I want to scrape data from www.marktplaats.nl. I want to analyze the scraped description, price, date, and views in Excel/Access.
I tried to scrape the data with Ruby (nokogiri, scrapi) but nothing worked (on other sites it worked well). The main problem is that, for example, SelectorGadget and the Firebug add-on (Firefox) don't find any CSS selectors I can use to scrape the page. On other sites I can extract the CSS selectors with SelectorGadget or Firebug and use them with nokogiri or scrapi.
Due to lack of experience it is difficult to identify the problem and therefore searching for a solution isn’t easy.
Can you tell me where to start solving this problem and where I maybe can find more info about a similar scraping process?
Thanks in advance!
I used an Excel web query and it works perfectly. You can find a lot about scraping with Excel on YouTube if you search for MrExcel.
Thanks, Mello
You can try IRobotSoft web scraper. It has good frame support and is free.
Iframes aren't a problem - just access the embedded iframe URL directly. You will find that it redirects in the browser unless you disable JavaScript.
Description and date can be extracted straight from the HTML source. However, prices are images, which will make scraping them more cumbersome.
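For example, a rough sketch with Nokogiri (the start URL and the CSS selectors are placeholders and need to be adapted after inspecting the page/iframe source):

    require 'open-uri'
    require 'nokogiri'

    # Placeholder URL: point this at the category/listing page you want.
    listing_url = 'http://www.marktplaats.nl/'
    doc = Nokogiri::HTML(URI.open(listing_url))

    # If the listings live in an iframe, grab its URL and fetch that page directly
    # (if the src is relative, prepend the site's base URL).
    iframe = doc.at_css('iframe')
    results = iframe ? Nokogiri::HTML(URI.open(iframe['src'])) : doc

    # '.listing', '.description' and '.date' are placeholder selectors.
    # Prices are images, so they are skipped here.
    results.css('.listing').each do |item|
      description = item.at_css('.description').text.strip
      date        = item.at_css('.date').text.strip
      puts "#{description}\t#{date}"
    end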