Embedding external link inside Electron - javascript

I've got an array of Instagram media data prepared for embedding. At the end of each HTML snippet there's a script tag: <script async defer src="//platform.instagram.com/en_US/embeds.js"></script>.
By default Electron resolves this link against the file:// protocol. Adding http:// to the src returns the JS file correctly.
So the link <script async defer src="http://platform.instagram.com/en_US/embeds.js"></script> works fine.
How can I solve this problem other than by parsing the data and rewriting the links?

The problem you are facing is that // is a protocol-relative URL, which uses whatever protocol the page requesting it was loaded with. You can read more about that here.
Your best bet to override this default behaviour is to parse the data and rewrite the links with something like a regex replacement; a rough sketch of that option follows.
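For example, a minimal sketch of the rewrite option, assuming each embed is held in a string (the variable name embedHtml is hypothetical):
// Hypothetical string holding one Instagram embed snippet.
let embedHtml = '...<script async defer src="//platform.instagram.com/en_US/embeds.js"></script>';
// Rewrite protocol-relative src/href attributes to https:// so Electron
// does not resolve them against file://.
embedHtml = embedHtml.replace(/(src|href)="\/\//g, '$1="https://');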
Alternatively, you can intercept the file protocol, verify that the URL is one you want to intercept, and then reformat it; you can learn how to do this here. An example that does not include verification of the paths you wish to intercept is below.
const {app, protocol} = require('electron')
const path = require('path')

app.on('ready', () => {
  // Intercept every request made through the file:// protocol.
  protocol.registerFileProtocol('file', (request, callback) => {
    // Strip the leading 'file://' (7 characters) from the requested URL.
    const url = request.url.substr(7)
    // Resolve the remaining path relative to the app directory.
    callback({path: path.normalize(`${__dirname}/${url}`)})
  }, (error) => {
    if (error) console.error('Failed to register protocol')
  })
})

Related

How to make a custom url for a file in electron

I am trying to build a mini browser using Electron.js. Is it possible to make urls like chrome://settings or about:config, so that when the user goes to that link I can show an html file? I basically want to associate a url with a file in electron.
You could use Data URIs and base64-encode the contents of your data as the link. You can use JavaScript to encode and decode binary data; you just specify the MIME type at the start.
If you go to the following URL in a browser, for example, you'll see a PNG decoded and rendered:
data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO9TXL0Y4OHwAAAABJRU5ErkJggg==
The MDN Web doc in the first link mentions the process of base64 encoding an HTML file.
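For instance, a minimal sketch of loading a local HTML file as a Data URI in Electron's main process (the file name settings.html is hypothetical):
const { app, BrowserWindow } = require('electron');
const fs = require('fs');

app.whenReady().then(() => {
  const win = new BrowserWindow();
  // Read the page to associate with the custom URL (hypothetical file name).
  const html = fs.readFileSync(`${__dirname}/settings.html`, 'utf8');
  // Base64-encode it, prefix the MIME type, and load it as a data: URI.
  win.loadURL('data:text/html;base64,' + Buffer.from(html).toString('base64'));
});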
Alternatively, if you just want to force the download of a link you could add the download attribute to your anchor.
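As a rough sketch, the download attribute can also be added programmatically:
// Minimal sketch: force a download by creating an anchor with the
// download attribute and clicking it programmatically.
const a = document.createElement('a');
a.href = 'data:text/plain;base64,' + btoa('hello world'); // any data URI (or file URL)
a.download = 'hello.txt'; // suggested file name
document.body.appendChild(a);
a.click();
a.remove();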
You can use did-start-navigation to detect when they go to chrome://settings/ then intercept that and tell it to go to https://stackoverflow.com/ instead.
Here's the code:
mainWin.webContents.on('did-start-navigation', function (evt, navigateUrl) {
  if (navigateUrl == 'chrome://settings/') {
    evt.preventDefault();
    setTimeout(function () { // Without this it just crashes, no idea why.
      mainWin.loadURL('https://stackoverflow.com/');
    }, 0);
  }
});
I tried the `will-navigate` event, but it didn't work.
Docs for: did-start-navigation
After a little searching on npm, I found a package that does exactly what I want: electron-protocols. It's a simple way to add custom protocols in Electron. Here's an example:
const { app } = require('electron');
const protocols = require('electron-protocols');
const path = require('path');

protocols.register('browser', uri => {
  let base = app.getAppPath();
  if (uri.hostname == "newtab") {
    return path.join(base, "newtab.html");
  }
});
In this example, if you go to the link browser://newtab, it opens newtab.html. And if you type location.href in the DevTools console, it shows browser://newtab there too.

How to get all links from a website with puppeteer

Well, I would like a way to use Puppeteer and a for loop to get all the links on a site and add them to an array. In this case, the links I want are not just the ones in the HTML tags; they are links that appear directly in the source code, such as JavaScript file links. I want something like this:
array = [ ]
for(L in links){
  array.push(L)
  //The code should take all the links and add these links to the array
}
But how can I get all references to JavaScript and style files, and all URLs that are in the source code of a website?
I have only found posts and questions that teach or show how to get the links from the tags, not all the links from the source code.
Suppose you want to get all the tags on this page, for example:
view-source:https://www.nike.com/
How can I get all script tags and print them to the console? I put view-source:https://nike.com because that's where you can see the script tags. I don't know if you can do it without displaying the source code, but displaying it and getting the script tags was the idea I had; however, I do not know how to do it.
It is possible to get all links from a URL using only node.js, without puppeteer:
There are two main steps:
Get the source code for the URL.
Parse the source code for links.
Simple implementation in node.js:
// get-links.js

///
/// Step 1: Request the URL's HTML source.
///
const axios = require('axios');

const promise = axios.get('https://www.nike.com');

// Extract the HTML source from the response, then process it:
promise.then(function(response) {
  const htmlSource = response.data;
  getLinksFromHtml(htmlSource);
});

///
/// Step 2: Find links in the HTML source.
///

// This function takes HTML (as a string) and outputs all the links within it.
function getLinksFromHtml(htmlString) {
  // Regular expression that matches the syntax of a link (https://stackoverflow.com/a/3809435/117030):
  const LINK_REGEX = /https?:\/\/(www\.)?[-a-zA-Z0-9#:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()#:%_\+.~#?&//=]*)/gi;

  // Use the regular expression from above to find all the links:
  const matches = htmlString.match(LINK_REGEX);

  // Output to console:
  console.log(matches);

  // Alternatively, return the array of links for further processing:
  return matches;
}
Sample usage:
$ node get-links.js
[
'http://www.w3.org/2000/svg',
...
'https://s3.nikecdn.com/unite/scripts/unite.min.js',
'https://www.nike.com/android-icon-192x192.png',
...
'https://connect.facebook.net/',
... 658 more items
]
Notes:
I used the axios library for simplicity and to avoid "access denied" errors from nike.com. It is possible to use any other method to get the HTML source, like:
Native node.js http/https libraries (a sketch follows after this list)
Puppeteer (Get complete web page source html with puppeteer - but some part always missing)
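For instance, a minimal sketch of the native https approach (the User-Agent header is an assumption, added to reduce the chance of an "access denied" response from nike.com):
const https = require('https');

// Fetch the HTML source with the built-in https module instead of axios.
https.get('https://www.nike.com', { headers: { 'User-Agent': 'Mozilla/5.0' } }, (res) => {
  let htmlSource = '';
  res.on('data', (chunk) => { htmlSource += chunk; });
  res.on('end', () => {
    // Reuse getLinksFromHtml() from step 2 above.
    getLinksFromHtml(htmlSource);
  });
}).on('error', (err) => console.error(err));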
Although the other answers are applicable in many situations, they will not work for client-side rendered sites. For instance, if you just do an Axios request to Reddit, all you'll get is a couple of divs with some metadata. As Puppeteer actually gets the page and parses all JavaScript in a real browser, the websites' choice of document rendering becomes irrelevant for extracting page data.
Puppeteer has an evaluate method on the page object which allows you to run JavaScript directly on the page. Using that, you can easily extract all links as follows:
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');

  const pageUrls = await page.evaluate(() => {
    const urlArray = Array.from(document.links).map((link) => link.href);
    const uniqueUrlArray = [...new Set(urlArray)];
    return uniqueUrlArray;
  });

  console.log(pageUrls);
  await browser.close();
})();
Yes, you can get all the script tags and their links without opening view-source.
You need to add a dependency on the jsdom library to your project and then pass the HTML response to a jsdom instance as shown below.
Here is the code:
const axios = require('axios');
const jsdom = require("jsdom");

(async () => {
  // hit a simple HTTP request using axios or node-fetch as you wish
  const nikePageResponse = await axios.get('https://www.nike.com');

  // now parse this response into an HTML document using the jsdom library
  const dom = new jsdom.JSDOM(nikePageResponse.data);
  const nikePage = dom.window.document;

  // now get all the script tags by querying this page
  let scriptLinks = [];
  nikePage.querySelectorAll('script[src]').forEach(script => scriptLinks.push(script.src.trim()));
  console.debug('%o', scriptLinks);
})();
Here I have written a CSS selector for <script> tags that have a src attribute.
You can write the same code using Puppeteer, but it will take extra time opening the browser and everything, and then getting its page source.
You can use this to find the links and then do whatever you want with them, using Puppeteer or anything else.

Tampermonkey To open multiple javascript in href in new tab [duplicate]

Over the years on Snapchat I have saved lots of photos that I would like to retrieve now. The problem is they do not make it easy to export, but luckily if you go online you can request all the data (that's great).
I can see the download link for each of my photos, and using the local HTML file, if I click download it starts downloading.
Here's where the tricky part is: I have around 15,000 downloads I need to do, and manually clicking each individual one will take ages. I've tried extracting all of the links from the download button, and this creates lots of URLs (great), but the problem is, if you paste the URL into the browser then "Error: HTTP method GET is not supported by this URL" appears.
I've tried a multitude of different Chrome extensions and none of them show the actual download, just the HTML which is on the left-hand side.
The download button is a clickable link (an <a> href) that just starts the download in the tab.
I'm trying to figure out what the best way of bulk downloading each of these individual files is.
So, I just looked at their code by downloading my own memories. They use a custom JavaScript function to download your data (a POST request with IDs in the body).
You can replicate this request, but you can also just use their method.
Open your console and use downloadMemories(<url>)
Or if you don't have the urls you can retrieve them yourself:
// Collect every download link (anchor) from the memories table in the exported HTML page.
var links = document.getElementsByTagName("table")[0].getElementsByTagName("a");
// Each href is a javascript: call to their download function, so evaluating it starts the download.
eval(links[0].href);
UPDATE
I made a script for this:
https://github.com/ToTheMax/Snapchat-All-Memories-Downloader
Using the .json file you can download them one by one with python:
import requests

# POST to the memory's Download Link; the response body is the URL of the actual file
req = requests.post(url, allow_redirects=True)
response = req.text
file = requests.get(response)
Then get the correct extension and the date:
day = date.split(" ")[0]
time = date.split(" ")[1].replace(':', '-')
filename = f'memories/{day}_{time}.mp4' if type == 'VIDEO' else f'memories/{day}_{time}.jpg'
And then write it to file:
with open(filename, 'wb') as f:
    f.write(file.content)
I've made a bot to download all memories.
You can download it here
It doesn't require any additional installation, just place the memories_history.json file in the same directory and run it. It skips the files that have already been downloaded.
Short answer
Download a desktop application that automates this process.
Visit downloadmysnapchatmemories.com to download the app. You can watch this tutorial guiding you through the entire process.
In short, the app reads the memories_history.json file provided by Snapchat and downloads each of the memories to your computer.
App source code
Long answer (How the app described above works)
We can iterate over each of the memories within the memories_history.json file found in your data download from Snapchat.
For each memory, we make a POST request to the URL stored as the memory's Download Link. The response will be a URL to the file itself.
Then, we can make a GET request to the returned URL to retrieve the file.
Example
Here is a simplified example of fetching and downloading a single memory using NodeJS:
Let's say we have the following memory stored in fakeMemory.json:
{
  "Date": "2022-01-26 12:00:00 UTC",
  "Media Type": "Image",
  "Download Link": "https://app.snapchat.com/..."
}
We can do the following:
// import required libraries
const fetch = require('node-fetch'); // Needed for making fetch requests
const fs = require('fs'); // Needed for writing to filesystem

(async () => {
  const memory = JSON.parse(fs.readFileSync('fakeMemory.json'));

  const response = await fetch(memory['Download Link'], { method: 'POST' });
  const url = await response.text(); // returns URL to file

  // We can now use the `url` to download the file.
  const download = await fetch(url, { method: 'GET' });

  const fileName = 'memory.jpg'; // file name we want this saved as
  const fileData = download.body; // contents of the file

  // Write the contents of the file to this computer using Node's file system
  const fileStream = fs.createWriteStream(fileName);
  fileData.pipe(fileStream);
  fileStream.on('finish', () => {
    console.log('memory successfully downloaded as memory.jpg');
  });
})();

How can I request the default favicon using Node?

Given a domain, how can I request the default favicon using Node? The default favicon location is at domain/favicon.ico Can I use a simple https.get()? There seem to be at least 5 native ways to do this?
So far the first method does not work. I get ERR_INVALID_DOMAIN_NAME for this code:
const https = require('https');
const url = 'imdb.com/favicon.io';

https.get(url, (resp) => {
  let data = '';
  resp.on('data', (chunk) => {
    data += chunk;
  });
  resp.on('end', () => {
    console.log(data);
  });
}).on("error", (err) => {
  console.log("Error: " + err.message);
});
If I change the URL to https://imdb.com/favicon.ico I get
<p>The document has moved here.</p>
If I change the URL to https://www.imdb.com/favicon.ico I get:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>302 Found</title>
</head><body>
<h1>Found</h1>
<p>The document has moved here.</p>
</body></html>
Finally if I change the URL to https://ia.media-imdb.com/images/G/01/imdb/images/favicon-2165806970 I get what looks like a blob or binary file or image.
How can I do this programmatically?
If I recall PHP had a method that knew how to follow the "redirects", but what about Node?
The default favicon location is at domain/favicon.ico
The default favicon path is /favicon.ico, but you need an absolute URL (schema://host/path) in order to make a request.
How can I do this programmatically?
If using core nodejs you need to manually follow the redirects via response.headers['location'], in some sort of recursive callback arrangement. Alternatively you could use the modules request or follow-redirects.
I get what looks like a blob or binary file or image.
Indeed, that's the image. As you can see from response.headers['content-type'] it is in the image/x-icon format, also known as ICO, as expected for a file called favicon.ico.
data += chunk
Note that because you're concatenating with strings instead of buffers, this will cause image corruption in current NodeJS versions. It tries to treat the binary data as UTF-8, replacing unknown sequences. Instead you presumably just want to pipe to an fs.WriteStream.
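Putting those points together, here's a minimal sketch using only core modules: it builds an absolute URL, follows redirects recursively via response.headers['location'], and pipes the final response to an fs.WriteStream instead of concatenating strings (the redirect limit of 5 is an arbitrary assumption):
const https = require('https');
const fs = require('fs');

function downloadFavicon(url, dest, redirectsLeft = 5) {
  https.get(url, (res) => {
    // Follow 3xx responses by requesting the Location header instead.
    if (res.statusCode >= 300 && res.statusCode < 400 && res.headers['location'] && redirectsLeft > 0) {
      // Resolve relative Location values against the current URL.
      const next = new URL(res.headers['location'], url).href;
      downloadFavicon(next, dest, redirectsLeft - 1);
      return;
    }
    // Pipe the binary body straight to disk; no string concatenation.
    res.pipe(fs.createWriteStream(dest));
  }).on('error', (err) => console.error(err.message));
}

// The default favicon path is /favicon.ico on the site's origin:
downloadFavicon(new URL('/favicon.ico', 'https://www.imdb.com').href, 'favicon.ico');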

Reading a text file then setting as variables to use for authentication in Javascript [duplicate]

At the moment, due to the security policy, Chromium cannot read local files via ajax without --allow-file-access-from-files. But I currently need to create a web application where the database is an XML file (or, in the extreme case, JSON) located in the same directory as index.html. It is understood that the user can run this application locally. Are there workarounds for reading the XML (JSON) file without wrapping it in a function and changing the extension to .js?
loadXMLFile('./file.xml').then(xml => {
  // working with xml
});

function loadXMLFile(filename) {
  return new Promise(function(resolve, reject) {
    if ('ActiveXObject' in window) {
      // If is IE
      var xmlDoc = new ActiveXObject('Microsoft.XMLDOM');
      xmlDoc.async = false;
      xmlDoc.load(filename);
      resolve(xmlDoc.xml);
    } else {
      /*
       * how to read xml file if is not IE?
       * ...
       * resolve(something);
       */
    }
  });
}
Accessing the file: protocol in Chromium using XMLHttpRequest() or a <link> element is not enabled by default without the --allow-file-access-from-files flag set when the Chromium instance is launched.
--allow-file-access-from-files
By default, file:// URIs cannot read other file:// URIs. This is an
override for developers who need the old behavior for testing.
If the user is aware that local files are to be used by the application, you can utilize an <input type="file"> element for the user to upload a file from their local filesystem, process the file using FileReader, then proceed with the application.
Otherwise, advise the user that use of the application requires launching Chromium with the --allow-file-access-from-files flag set, which can be done by creating a launcher for this purpose, specifying a different user data directory for the instance of Chromium. The launcher could be, for example
/usr/bin/chromium-browser --user-data-dir="/home/user/.config/chromium-temp" --allow-file-access-from-files
See also How do I make the Google Chrome flag “--allow-file-access-from-files” permanent?
The above command could also be run in a terminal
$ /usr/bin/chromium-browser --user-data-dir="/home/user/.config/chromium-temp" --allow-file-access-from-files
without creating a desktop launcher; then, when the instance of Chromium is closed, run
$ rm -rf /home/user/.config/chromium-temp
to remove the configuration folder for that instance of Chromium.
Once the flag is set, the user can include a <link> element with a rel="import" attribute, href pointing to the local file, and type set to "application/xml", as an option other than XMLHttpRequest to get the file. Access the XML document using
const doc = document.querySelector("link[rel=import]").import;
See Is there a way to know if a link/script is still pending or has it failed.
Another alternative, though more involved, would be to use requestFileSystem to store the file in the LocalFileSystem.
See
How to use webkitRequestFileSystem at file: protocol
jQuery File Upload Plugin: Is possible to preserve the structure of uploaded folders?
How to Write in file (user directory) using JavaScript?
Or create or modify a chrome app and use
chrome.fileSystem
See GoogleChrome/chrome-app-samples/filesystem-access.
The simplest approach would be to provide a means for file upload by affirmative user action; process the uploaded file, then proceed with the application.
const reader = new FileReader;
const parser = new DOMParser;

const startApp = function startApp(xml) {
  return Promise.resolve(xml || doc);
};

const fileUpload = document.getElementById("fileupload");
const label = document.querySelector("label[for=fileupload]");

const handleAppStart = function handleStartApp(xml) {
  console.log("xml document:", xml);
  label.innerHTML = currentFileName + " successfully uploaded";
  // do app stuff
};

const handleError = function handleError(err) {
  console.error(err);
};

let doc;
let currentFileName;

reader.addEventListener("loadend", handleFileRead);
reader.addEventListener("error", handleError);

// Parse the uploaded file's text into an XML document, then start the app with it.
function handleFileRead(event) {
  label.innerHTML = "";
  try {
    doc = parser.parseFromString(reader.result, "application/xml");
    fileUpload.value = "";
    startApp(doc)
      .then(function(data) {
        handleAppStart(data);
      })
      .catch(handleError);
  } catch (e) {
    handleError(e);
  }
}

// Read the selected file as text when the user picks an .xml file.
function handleFileUpload(event) {
  let file = fileUpload.files[0];
  if (/xml/.test(file.type)) {
    reader.readAsText(file);
    currentFileName = file.name;
  }
}

fileUpload.addEventListener("change", handleFileUpload);
<input type="file" name="fileupload" id="fileupload" accept=".xml" />
<label for="fileupload"></label>
use document.implementation.createDocument("", "", null)
instead of new ActiveXObject('Microsoft.XMLDOM').
You can find the API through GOOGLE. Good luck.
If I understand correctly, the deliverable is intended to run locally so you will not be able to set any flags for local file access on a user's machine. Something I've done in a pinch is to pack it up as an executable with something like nw.js and keep the external data files. Otherwise, you're probably looking at loading as script using a JSON schema in a JS file.
I had a similar problem before. I solved by simply embedding the XML file into the HTML using PHP. Since the application is loaded locally from disk, size, cache etc. are not a concern.
If you're using Webpack, you can instead directly import the file using a loader like this or this, in which case the file is included in the resulting bundled JavaScript.
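A rough sketch of the Webpack route (the specific loader is not named above; xml-loader is one assumption):
// webpack.config.js (sketch): teach Webpack to bundle .xml files.
module.exports = {
  module: {
    rules: [
      { test: /\.xml$/, use: ['xml-loader'] } // parses the XML into a JS object at build time
    ]
  }
};

// In application code, the parsed file then ships inside the bundle:
const data = require('./file.xml');
console.log(data);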
You can load XML from a string of text using DOMParser: just load your file and parse the text using .parseFromString. You could use an if statement checking window.DOMParser to verify that DOMParser is supported.
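A minimal sketch of that approach, assuming the XML has already been loaded into a string (for example via the file-input technique above):
// `xmlText` is assumed to hold the contents of file.xml as a string.
function parseXml(xmlText) {
  if (window.DOMParser) {
    // Modern browsers: parse the string into an XML Document.
    const parser = new DOMParser();
    return parser.parseFromString(xmlText, "application/xml");
  }
  // Legacy IE fallback, as in the question.
  const xmlDoc = new ActiveXObject('Microsoft.XMLDOM');
  xmlDoc.async = false;
  xmlDoc.loadXML(xmlText);
  return xmlDoc;
}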
