This is an example from the documentation.
var client = new WebTorrent()
var torrentId = 'magnet:?xt=urn:btih:08ada5a7a6183aae1e09d831df6748d566095a10&dn=Sintel&tr=udp%3A%2F%2Fexplodie.org%3A6969&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Ftracker.empire-js.us%3A1337&tr=udp%3A%2F%2Ftracker.leechers-paradise.org%3A6969&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337&tr=wss%3A%2F%2Ftracker.btorrent.xyz&tr=wss%3A%2F%2Ftracker.fastcast.nz&tr=wss%3A%2F%2Ftracker.openwebtorrent.com&ws=https%3A%2F%2Fwebtorrent.io%2Ftorrents%2F&xs=https%3A%2F%2Fwebtorrent.io%2Ftorrents%2Fsintel.torrent'
client.add(torrentId, function (torrent) {
  // Torrents can contain many files. Let's use the .mp4 file
  // var file = torrent.files.find(function (file) {
  //   return file.name.endsWith('.mp4')
  // })

  // this console.log never fires with the other magnet
  console.log(torrent)

  // Display the file by adding it to the DOM. Supports video, audio, image, etc. files
  torrent.files[0].appendTo('body')
})
The example works well. But if I change the magnet link to another one, nothing happens. The link I am changing it to is valid:
magnet:?xt=urn:btih:C45CE38E4508E775E49EB2A6841C814D1A8AD375&tr=http%3A%2F%2Fbt3.t-ru.org%2Fann%3Fmagnet
But nothing works with this link. There is not a single error, nothing at all.
I have had similar issues recently trying to work this out. Only instant.io (using a TURN server) consistently works; very little WebRTC stuff works for me.
I think the reason the template provided by WebTorrent works and no others do is that the example magnet contains an xs link to the torrent file on their website.
I suspect they are seeding it themselves, or even just providing it via some other means.
xs=https%3A%2F%2Fwebtorrent.io%2Ftorrents%2Fsintel.torrent
Whoever created instant.io took the WebTorrent template and made it work. WebRTC is an absolute nightmare, and the WebTorrent template/site doesn't even get the WebSocket connections right (for me, at least).
If you are looking to pass around relatively small amounts of data, then relaying via your own WebSockets is far more manageable.
If you want to create something that works like WebTorrent, look at instant.io's GitHub. You'll need to set up a server and configure a TURN server. WebRTC is like trying to configure a graphics card in 1992. Good luck.
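One thing worth checking before going that far: the working Sintel magnet lists several wss:// trackers (plus the ws=/xs= web seed), while the failing magnet only has an http:// tracker, and a browser-based WebTorrent client can only find peers through WebSocket trackers and WebRTC. A minimal sketch of forcing extra WSS trackers onto a magnet via the announce option (the tracker URLs below are the public ones from the Sintel magnet; whether you actually get data still depends on someone seeding that torrent over WebRTC):

var client = new WebTorrent()

// Sketch: add public WebSocket trackers so the browser client has somewhere
// to look for WebRTC peers. This does not help if nobody seeds over WebRTC.
var opts = {
  announce: [
    'wss://tracker.openwebtorrent.com',
    'wss://tracker.btorrent.xyz'
  ]
}

client.add(torrentId, opts, function (torrent) {
  torrent.files[0].appendTo('body')
})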
I'm working with an extremely old database system containing people and associated PDFs. I can access most data through a web browser; however, PDFs cannot be requested via the web API. I do have the liberty of loading any JavaScript library and can use the Chrome devtools console.
I want to get a proof of principle working where I can load a person's PDFs, but I'm not quite sure what the best approach is.
My idea was to upload a file into the website's local storage in my browser (since it's viewed several times). However, I seem to be lacking a good library to save/load files from the cache directory: this library hasn't been updated since 2016, and FileSaver.js doesn't seem to be keen on loading the files back after saving. Using a fully fledged database implementation seems like overkill (most files will be <= 5-10 MB).
Loading a file from local storage (even if the directory is added to the workspace in Chrome) seems completely impossible; that would have been an alternative.
Adding the local file path to an <a href=""> element did not work in Chrome.
Is there a feasible approach to manage/associate PDF files on my local drive with the website I'm working with (where I have full client-side control, i.e. I can load any library)?
Note: access is restricted, no Chrome add-ons can be used, and Chrome cannot be started with custom flags.
I don't know exactly what you are asking for, but this code will get all the PDFs in a selected directory, display them, and also build a list of all the File objects. It will only work in a "secure context" and in Chrome
(it also won't run in a sandbox like a Stack Overflow code snippet).
JavaScript:
let files = [];

async function r() {
  // let the user pick a directory, then collect a File object for every file in it
  for await (const [, handle] of (await window.showDirectoryPicker()).entries()) {
    if (handle.kind === "file") files.push(await handle.getFile());
  }
  // keep only the PDFs
  files = files.filter(f => f.type === "application/pdf");
  // display each PDF in an <embed>
  for (const f of files) {
    const e = document.createElement("embed");
    e.src = URL.createObjectURL(f);
    e.type = f.type;
    document.body.appendChild(e);
  }
}
HTML:
<button onclick="r()">read PDFs</button>
You can probably also build on this if you need to send the local PDFs somewhere.
Not sure this answers the question, but I hope it helps.
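For the "send them somewhere" part, a minimal sketch using fetch + FormData (the /upload endpoint and the pdfs field name are hypothetical, adjust them to whatever your backend expects):

// Sketch: upload the picked PDFs. "/upload" and the "pdfs" field name are
// placeholders for whatever your backend actually accepts.
async function upload(files) {
  const fd = new FormData();
  for (const f of files) fd.append("pdfs", f, f.name);
  const res = await fetch("/upload", { method: "POST", body: fd });
  if (!res.ok) throw new Error("upload failed: " + res.status);
}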
Since ActiveX controls are no longer available, browsers can either display a PDF themselves or let the user download it.
For any more control than that, I suspect you could try rendering the PDF with a JavaScript library like https://mozilla.github.io/pdf.js/
Even then you won't be in a position to control the PDF version; alternatively, you could render the PDFs to images on the server and display image versions of the pages.
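If in-page rendering is enough, a minimal pdf.js sketch looks roughly like this (assuming a recent pdf.js build that exposes the global pdfjsLib, and a <canvas id="viewer"> element on the page; the file name and element id are placeholders):

// Sketch: render page 1 of a PDF into a canvas with pdf.js.
// "example.pdf" and the canvas id are placeholders.
const canvas = document.getElementById("viewer");
pdfjsLib.getDocument("example.pdf").promise.then(async (pdf) => {
  const page = await pdf.getPage(1); // page numbers are 1-based
  const viewport = page.getViewport({ scale: 1.5 });
  canvas.width = viewport.width;
  canvas.height = viewport.height;
  await page.render({ canvasContext: canvas.getContext("2d"), viewport }).promise;
});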
I'm a bit new to JavaScript/Node.js and their packages. Is it possible to download a file using my local browser or network? Whenever I look up scraping or downloading HTML files, it is always done through a separate package, with some server making the request to a given URL. How do I make my own computer download an HTML file, as if I did right click > Save As on a Google Chrome webpage, without running into server/security issues and JavaScript errors?
Fetching a document over HTTP(S) in Node is definitely possible, although not as simple as in some other languages. Here's the basic structure:
const https = require(`https`); // use http instead if it's an http:// url

https.get(URLString, res => {
  const buffers = [];
  res.on(`data`, data => buffers.push(data));
  res.on(`end`, () => {
    const data = Buffer.concat(buffers);
    /*
      from here you can do what you want with the data. You can write it to a file
      with fs, you can console.log it using data.toString(), etc.
    */
  });
});
Edit: I think I missed the main question you had, give me a sec to add that.
Edit 2: If you're comfortable with doing the above, the way to access a website the same way as your browser does is to open the developer tools (F12 on Chrome), go to the Network tab, and find the request that the browser made; then, using http(s).get(url, options, callback), set the exact same headers in the options that you see in your browser. Most of the time you won't need all of them; usually the authentication/session cookie is enough.
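A minimal sketch of that, combined with writing the result to disk (the header values are placeholders you'd copy from the request shown in the Network tab; the three-argument form of https.get needs a reasonably recent Node):

const https = require(`https`);
const fs = require(`fs`);

// Sketch: the header values below are placeholders, copy the real ones from
// the request your browser made (Network tab of the dev tools).
const options = {
  headers: {
    "User-Agent": "Mozilla/5.0",
    "Cookie": "session=PASTE_YOUR_SESSION_COOKIE_HERE"
  }
};

https.get(URLString, options, res => {
  const buffers = [];
  res.on(`data`, data => buffers.push(data));
  res.on(`end`, () => {
    // save the page roughly the way "Save As" would (HTML only, no assets)
    fs.writeFileSync(`page.html`, Buffer.concat(buffers));
  });
});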
I've converted an existing web application (HTML5, JS, CSS, etc.) into a Windows UWP app so that (hopefully) I can distribute it via the Windows Store to Surface Hubs so it can run offline. Everything is working fine, except PDF viewing. If I open a PDF in a new window, the Edge-based browser window simply crashes. If I open an IFRAME and load PDFJS into it, that also crashes. What I'd really like to do is just hand off the PDF to the operating system so the user can view it in whatever PDF viewer they have installed.
I've found some Windows-specific JavaScript APIs that seem promising, but I cannot get them to work. For example:
Windows.System.Launcher.launchUriAsync(
  new Windows.Foundation.Uri(
    "file:///" +
    Windows.ApplicationModel.Package.current.installedLocation.path
      .replace(/\\/g, "/") + "/app/" + url)).then(function (success) {
  if (!success) {
    // never reported as a failure either
  }
})
That generates a file:// URL that I can copy into Edge and it shows the PDF, so I know the URL stuff is right. However, in the application it does nothing.
If I pass an https:// URL into that launchUriAsync function, that works. So it appears that function just doesn't like file:// URLs.
I also tried this:
Windows.ApplicationModel.Package.current.installedLocation.getFileAsync(url).then(
  function (file) { Windows.System.Launcher.launchFileAsync(file) })
That didn't work either. Again, no error. It just didn't do anything.
Any ideas of other things I could try?
-- Update --
See the accepted answer. Here is the code I ended up using. (Note that all my files are in a subfolder called "app"):
if (location.href.match(/^ms-appx:/)) {
  url = url.replace(/\?.+/, "");
  Windows.ApplicationModel.Package.current.installedLocation
    .getFileAsync(("app/" + url).replace(/\//g, "\\")).then(
      function (file) {
        var fn = performance.now() + url.replace(/^.+\./, ".");
        file.copyAsync(Windows.Storage.ApplicationData.current.temporaryFolder, fn).then(
          function (file2) {
            Windows.System.Launcher.launchFileAsync(file2)
          })
      });
  return;
}
Turns out you have to turn the / into \ or it won't find the file. And copyAsync refuses to overwrite, so I just use performance.now() to ensure I always use a new file name. (In my application, the source file names of the PDFs are auto-generated anyway.) If you wanted to keep the filename, you'd have to add a bunch of code to check whether it's already there, etc.
LaunchFileAsync is the right API to use here. You can't launch a file directly from the install directory because it is protected. You need to copy it first to a location that is accessible for the other app (e.g. your PDF viewer). Use StorageFile.CopyAsync to make a copy in the desired location.
Official SDK sample: https://github.com/Microsoft/Windows-universal-samples/tree/master/Samples/AssociationLaunching
I just thought I'd add a variation on this answer, which combines some details from above with this info about saving a blob as a file in a JavaScript app. My case is that I have a BLOB that represents the data for an epub file, and because of the UWP content security policy, it's not possible simply to force a click on a URL created from the BLOB (that "simple" method is explicitly blocked in UWP, even though it works in Edge). Here is the code that worked for me:
// Copy BLOB to downloads folder and launch from there in Edge
// First create an empty file in the folder
Windows.Storage.DownloadsFolder.createFileAsync(filename,
  Windows.Storage.CreationCollisionOption.generateUniqueName).then(
    function (file) {
      // Open the returned dummy file in order to copy the data to it
      file.openAsync(Windows.Storage.FileAccessMode.readWrite).then(function (output) {
        // Get the InputStream stream from the blob object
        var input = blob.msDetachStream();
        // Copy the stream from the blob to the File stream
        Windows.Storage.Streams.RandomAccessStream.copyAsync(input, output).then(
          function () {
            output.flushAsync().done(function () {
              input.close();
              output.close();
              Windows.System.Launcher.launchFileAsync(file);
            });
          });
      });
    });
Note that CreationCollisionOption.generateUniqueName handles the file renaming automatically, so I don't need to fiddle with performance.now() as in the answer above.
Just to add that one of the things that's so difficult about UWP app development, especially in JavaScript, is how hard it is to find coherent information. It took me hours and hours to put the above together from snippets and post replies, following false paths and incomplete MS documentation.
You will want to use the PDF APIs: https://github.com/Microsoft/Windows-universal-samples/tree/master/Samples/PdfDocument/js
https://github.com/Microsoft/Windows-universal-samples/blob/master/Samples/PdfDocument/js/js/scenario1-render.js
Are you simply trying to render a PDF file?
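If so, a rough sketch of what that sample does (rendering the first page of a packaged PDF into an <img>; the file path and element id are assumptions, and this loosely follows scenario1-render.js rather than reproducing it exactly):

// Sketch: render page 0 of a packaged PDF into an <img> with Windows.Data.Pdf.
// "app\\sample.pdf" and the "pdfPage" element id are placeholders.
Windows.ApplicationModel.Package.current.installedLocation
  .getFileAsync("app\\sample.pdf")
  .then(function (file) {
    return Windows.Data.Pdf.PdfDocument.loadFromFileAsync(file);
  })
  .then(function (pdfDocument) {
    var page = pdfDocument.getPage(0);
    var stream = new Windows.Storage.Streams.InMemoryRandomAccessStream();
    return page.renderToStreamAsync(stream).then(function () {
      var blob = MSApp.createBlobFromRandomAccessStream("image/png", stream);
      document.getElementById("pdfPage").src = URL.createObjectURL(blob, { oneTimeOnly: true });
    });
  });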
I am using Angular and ASP.NET Web API to allow users to download files that are generated on the server.
HTML Markup for download link:
<img src="/content/images/table_excel.png">
<a ng-click="exportToExcel(report.Id)">Excel Model</a>
<a id="report_{{report.Id}}" target="_self"></a>
The last anchor tag serves as a placeholder for an automatic click event. The visible anchor calls the exportToExcel method to initiate the call to the server and begin creating the file.
$scope.exportToExcel = function (reportId) {
  reportService.excelExport(reportId, function (result) {
    var url = "/files/report_" + reportId + "/" + result.data.Model.fileName;
    var dLink = document.getElementById("report_" + reportId);
    dLink.href = url;
    dLink.setAttribute('download', result.data.Model.fileName);
    dLink.click();
  });
};
The Web API code creates an Excel file. The file on the server is about 279k, but when it is downloaded on the client it is only 7k. My first thought was that the automatic click might be happening before the file is completely written, so I added a 10 second $timeout around the click event as a test. It failed with the same result.
This seems to only be happening on our remote QA server. On my local development server I always get the entire file back. I am at a loss as to why this might be happening. We have similar functionality where files are constructed from a database blob and saved to the local disk for download. The same method is employed for the client side download and that seems to work fine. I am wondering if anyone else has run into a similar issue.
Update
After the comment by SilentTremmor, we think it may actually be IIS or some sort of server issue. Originally we didn't think it could be, but after some digging it may well be. The client code only ever receives 7k of data; it doesn't matter what we try to download, the result is always the same.
It turns out the API application was writing the file to a different instance of our application. The client code had no idea and was trying to download a file that did not exist there, so the file the download link ended up serving was essentially empty, hence the small file size.
So I've been researching this for a couple of days and haven't come up with anything conclusive. I'm trying to create a (very) rudimentary liveblogging setup, because I don't want to pay for something like CoverItLive. My process is: local HTML file > cloud storage (Dropbox/Drive/etc.) > iframe on the content page. All of that works, and with some CSS it even looks pretty nice despite the less-than-awesome approach.
But here's the thing: the liveblog itself is made up of an HTML table, and I have to manually copy/paste the code for a new row, fill in the timestamp, write the new message, and save the document (which then syncs with the cloud and shows up in the iframe). To simplify the process I've made another HTML file which I intend to run locally and use to add entries to the table automatically. At the moment it's just a bunch of input boxes and some JavaScript to automate the timestamp and write the table row from the input data.
Code, as it stands now: http://jsfiddle.net/LukeLC/999bH/
What I'm looking to do from here is find a way to somehow export the generated table data to another .html file on my hard drive. So far I've managed to get this code...
if (document.documentElement && document.documentElement.innerHTML) {
  var a = document.getElementById("tblive").innerHTML;
  a = a.replace(/</g, '&lt;');
  var w = window.open();
  w.document.open();
  w.document.write('<pre><tblive>\n' + a + '\n</tblive></pre>');
  w.document.close();
}
...to open just the generated table code in a new window, and sure, I can save the source from there, but the whole point is to eliminate steps like that from the process.
How can I tell the page to save the generated code to a separate .html file when I click on the 'submit' button? Again, all of this happens locally, not on a server.
I'm not very good with JavaScript (and maybe a different language will be necessary), but any help is much appreciated.
I suppose you could do something like this:
var myHTMLDoc = "<html><head><title>mydoc</title></head><body>This is a test page</body></html>";
var uri = "data:application/octet-stream;base64,"+btoa(myHTMLDoc);
document.location = uri;
By the way, btoa might not be cross-browser; I think modern browsers all have it, but older versions of IE don't. As far as I know, base64 isn't even needed. You might be able to get away with:
var uri = "data:application/octet-stream,"+myHTMLDoc;
A drawback of this is that you can't set the filename when it gets saved.
You can't do this with JavaScript alone, but you can use an HTML5 link to open a save dialogue:
<a href="pageToDownload.html" download>Download</a>
You could add some smarts to automate it on the processed page after the POST.
fiddle : http://jsfiddle.net/ghQ9M/
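Combining the data: idea above with the download attribute also lets you set the filename. A small sketch (assumes a browser with Blob URL and download attribute support; 'liveblog.html' is just an example name):

// Sketch: grab the generated table markup and trigger a "save as" via a
// Blob URL plus the download attribute, which also sets the filename.
var html = document.getElementById('tblive').innerHTML;
// you would probably wrap this in a full <html><body>...</body></html> shell
var blob = new Blob([html], { type: 'text/html' });
var a = document.createElement('a');
a.href = URL.createObjectURL(blob);
a.download = 'liveblog.html';
document.body.appendChild(a);
a.click();
URL.revokeObjectURL(a.href);
a.remove();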
Simple answer: you can't.
JavaScript is restricted from performing such operations for security reasons.
The best way to accomplish this would be to have a server page that writes the new file on the server, and then perform a POST request from JavaScript to that page, passing the data you want written to the new file.
If you want the user to save the page to their own file system, that is a different problem, and the best approach would be to notify the user and ask them to save the page; that page could be your new window, like you are already doing with w.open().
Let me give a quick demonstration:
// assuming you know jQuery or are willing to use it :)
var html = $("#tblive").html().replace(/</g, '&lt;');

// generate your download page on the server
$.post('generate_page.php', { content: html })
  .done(function (data) {
    // data here is the response from the server, i.e. the new dynamic page's
    // file name, so you can link to it (and maybe do window.location = "new page";)
    var filename = data;
    // inject some html to allow the user to navigate to the new page (example)
    $('#tblive').parent().append(
      '<a href="' + filename + '">Check your Dynamic Page!</a>');
  });
On the server side, something like this:
<?php
if($_REQUEST["content"]){
$pagename = uniqid("page_", true) . '.html';
file_put_contents($pagename, $_REQUEST["content"]);
echo $pagename;
}
?>
A few notes: I haven't tested the example, but it should work in theory.
The effort to implement it should be minimal, assuming this solves your problem.
A server-based solution:
You'll need to set up a server (or your own PC) to serve your HTML page with headers that tell the browser to download the page instead of rendering the markup. If you want to do this on your local machine, you can use software such as WAMP (or MAMP for Mac, or LAMP for Linux), which is basically a web server in a .exe. It's a lot of hassle, but it'll work.
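The header doing the work there is Content-Disposition. A minimal sketch of the same idea using Node instead of WAMP/PHP (the file name and port are arbitrary):

// Sketch: serve liveblog.html with a header that forces a download prompt
// instead of rendering the markup. The same Content-Disposition header works
// in any server, WAMP/Apache/PHP included.
const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  res.setHeader('Content-Type', 'text/html');
  res.setHeader('Content-Disposition', 'attachment; filename="liveblog.html"');
  fs.createReadStream('liveblog.html').pipe(res);
}).listen(8080);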