I am trying to migrate images with a small Chrome extension I have built. The extension consists of 5 files:
popup.html - the plugin html document
Has some html buttons
also has a script link to my settings.js file that listens for the image download button to be clicked and sends a message to my content script, run.js, to find the images
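A minimal sketch of that listener in settings.js (the button id and the message name are illustrative, not the real ones):
// settings.js - popup script (sketch; button id and message name are placeholders)
document.getElementById('downloadImages').addEventListener('click', function() {
    chrome.tabs.query({ active: true, currentWindow: true }, function(tabs) {
        // ask the content script (run.js) in the active tab to collect the images
        chrome.tabs.sendMessage(tabs[0].id, { method: 'findImages' });
    });
});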
run.js - content script (has access to the webpage's DOM).
This script receives the message from settings.js, then finds all the image names and image links I want to download. It puts them into an object and sends them via message to the background script: bootstrap.js
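A rough sketch of what run.js does (the img selector and the 'findImages' message name are illustrative; the outgoing message mirrors the fields bootstrap.js expects below):
// run.js - content script (sketch)
chrome.extension.onMessage.addListener(function(request) {
    if (request.method == 'findImages') { // sent from settings.js via chrome.tabs.sendMessage
        var imageLinks = [], imageName = [];
        var imgs = document.getElementsByTagName('img');
        for (var i = 0; i < imgs.length; i++) {
            imageLinks.push(imgs[i].src);
            imageName.push(imgs[i].src.split('/').pop());
        }
        chrome.extension.sendMessage({ message: { method: 'download', imageLinks: imageLinks, imageName: imageName } });
    }
});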
bootstrap.js - runs as the background page and has access to the Chrome API.
var Downloads = [];

chrome.extension.onMessage.addListener(function(request, sender, sendResponse)
{
    if(request.message.method == 'download'){
        // "<" rather than "<=", otherwise the last iteration passes an undefined url
        for( var d = 0; d < request.message.imageLinks.length; d++ ){
            chrome.downloads.download( {url: request.message.imageLinks[d],
                filename: request.message.imageName[d]}, function(id){
            });
        }
        sendResponse('done'); // respond once, after all downloads have been queued
    }
});
This all works fine and dandy. It loops through the images and downloads them.
What I need to be able to do now is: take the images I just downloaded, and insert them into the file upload fields on the other website (which I have open in another tab), which have the same field names etc.
I see Chrome has a
//Initiate dragging the downloaded file to another application. Call in a javascript ondragstart handler.
chrome.downloads.drag(integer downloadId)
At first I thought this might work, since you can manually drag the downloaded image into the HTML file upload field without having to click and select the file. But I can't find any documentation / examples on it.
Does anyone know if it is possible to accomplish this with JavaScript / the Chrome API?
First you need to access the content of the files you downloaded.
For that, you need to add a permission for file://* in your manifest file.
Then you can loop through your downloaded files with chrome.downloads.search for instance and get each file's path in the system (DownloadItem.filename property). For each file, make a GET XMLHttpRequest to the url file://{filepath} to get the file's content.
Once you have that content, you can convert it to a DataUrl and programmatically add the file to the FormData of your form (not the actual inputs though, just the FormData submitted in the end). See this answer for this part.
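A rough sketch of that flow (assuming the downloads and file-access permissions are in place; downloadId would be one of the ids returned by chrome.downloads.download, and "upload" is just a placeholder field name — FormData.append takes the Blob the XHR returns here):
// Sketch: look up a finished download, read it back via file://, and append it to a FormData
chrome.downloads.search({ id: downloadId }, function(results) {
    var item = results[0]; // DownloadItem; item.filename is the absolute path on disk
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'file://' + item.filename, true);
    xhr.responseType = 'blob';
    xhr.onload = function() {
        var formData = new FormData();
        // "upload" is an assumed field name - use whatever the target form expects
        formData.append('upload', xhr.response, item.filename.split(/[\\/]/).pop());
        // POST formData to the target site's upload endpoint (or hand the data to a
        // content script running in the other tab)
    };
    xhr.send();
});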
I have a mobile app that wraps around the web-app, using webview.
The web-app has a button to open a large .zip file (e.g. 100 MB).
The user clicks a button, and selects a .zip file.
This triggers an onChange function with a variable of type File (Blob), which includes attributes like:
file name
file size
file type (application/zip)
The javascript code then parses the .zip file, extracts specific data within it and uses it within the web-app.
This works well within the web-app, when the app is called via the Chrome browser.
For example when operated in chrome browser on an Android phone, I can pull the .zip file and open it in the web-app.
I want to do the same but using the mobile app.
I am able to pick up the .zip file using a File Chooser and pass it to the WebView, but I have problems fetching the file from the JavaScript code.
For reference, I am able to pass an image by creating a data URI using a StringBuilder and passing the content (as data:image/jpeg;base64).
But the zip file is much larger.
When calling fetch(fileUri) from the JavaScript side I'm getting errors.
I'm using the following URI:
/storage/emulated/0/Android/data/com.example/files/Download/file2.zip
The fetch succeeds but returns a blob with a size of 165 (i.e. not the actual size of the file) which contains the error message:
{
"error": "Not Found",
"message": "The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again."
}
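For reference, the JavaScript side does roughly this (a sketch; parseZip stands in for the existing parsing code):
// Sketch of the WebView-side call; fileUrl is the path passed in from the Java side
fetch(fileUrl)
    .then(function(response) {
        if (!response.ok) throw new Error('fetch failed: ' + response.status);
        return response.blob();
    })
    .then(function(blob) {
        console.log('fetched blob size:', blob.size); // should match the real file size
        return parseZip(blob); // placeholder for the existing zip-parsing code
    })
    .catch(function(err) {
        console.error(err);
    });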
The program flow is like so:
I select a .zip file via FileChooser.
In onActivityResult, the uri value is /document/msf:12858 (seen via uri = intent.getData();)
The uri needs to be mapped to a real file path URL, so that the fileUrl can be passed to the WebView.
The WebView will then fetch the file using the fileUrl.
I searched how to get the real path file url when selecting a file with FileChooser, and found
this, and this links.
I wasn't able to get the real file path, so I decided to read the file and write it to another location, so I could get a file path (this is not efficient and is done just to check the functionality).
I create the new file using the following code:
InputStream stream = context.getContentResolver().openInputStream(uri);
File file2 = new File(context.getExternalFilesDir(Environment.DIRECTORY_DOWNLOADS), "file2.zip");
writeBytesToFile(stream, file2);
I don't see any errors when creating the file, and the number of bytes read from the stream and written to the new file is as expected.
For file2, I get a value of:
/storage/emulated/0/Android/data/com.example/files/Download/file2.zip
Then, within the Javascript code I fetch this file path.
But I'm getting a Blob with the "file-not-found" content as above.
So:
How can I verify that the file is indeed created and that the path can be fetched from webview?
How can I get the real file path of the original selected file, so I don't have to read and write the original file to new location just to get the file path?
Thanks
I was able to get the file from external storage by doing the following steps:
create an initial uri (uri1)
The uri is created by:
creating a temporary file (file1) in the storage dir via
context.getExternalFilesDir(Environment.DIRECTORY_DOWNLOADS)
I'm not sure why a temporary file needs to be created, but if I don't create a file I cannot get the uri.
get the uri via
Uri uri1 = FileProvider.getUriForFile(context, "com.example.android.fileprovider", file1);
create an intent with the following attributes:
Intent.ACTION_OPEN_DOCUMENT
category: Intent.CATEGORY_OPENABLE
type: "application/zip"
extra attribute: fileIntent.putExtra(DocumentsContract.EXTRA_INITIAL_URI, uri1);
this opens a dialog box for selecting openable zip files in the Downloads directory,
after the file is selected, a new uri (uri2) is created that includes the name of the selected file.
extract the name of the file via
String fileName = getFileName(context, uri2);
create the dirPath by appending the filename
dirPath = "/data/user/0/com.example/" + fileName;
if the dirPath does not exist (first time), write the file to its dirPath location.
on successive occasions dirPath exists, so there is no need to re-write the file.
open the file with regular Java means, e.g. via
ZipFile zip = new ZipFile(dirPath);
I want to display autonomous "sites" on an HTML page (say "root").
Those "sites" contain a landing page, index.html, and a collection of *.css, *.js, *.png files.
By autonomous, I mean those sites do not have external dependencies and all paths are relative, i.e. you can copy them into any directory or host them anywhere and they'll work.
Those sites are zipped in an archive that contains all the necessary files.
Say I have no problem with the download and have all the files in memory (as path/uint8 array pairs).
How could I display the site in a safe way?
I can parse the index.html, change all the src and href attributes to data URLs of the original files, and load it in an iframe.
It works rather well, but it breaks where there are scripts like this:
if (extension == "pdf")
img.src = "images/thumb-pdf.png"
Is there any way to control the url served by the iframe?
Some kind of proxy?
Can I intercept "images/thumb-pdf.png" and serve MemoryCacheOfAllFiles["images/thumb-pdf.png"].toDataURL() instead?
PS: Of course I have no control over those sites and I can't store them on a server (it would be too easy).
A proxy is possible in JavaScript: you have to use a service worker and listen for the "fetch" event to push the virtual content.
HTML pushed into the iframe:
...
navigator.serviceWorker.register('worker.js').then(function() {
logInstall('Installing...');
})
...
worker.js:
...
self.onfetch = function(event) {
    var cached = mycache[event.request.url]; // do we have content for this url?
    if (cached)
        event.respondWith(new Response(cached.content, {
            headers: {"Content-Type": cached.mimetype}
        }));
    else
        event.respondWith(fetch(event.request));
};
...
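The mycache map above still needs to be filled with the unzipped files; one way to do that (a sketch, assuming the page posts the in-memory files to the worker once it is active, and that MemoryCacheOfAllFiles maps relative paths to {content, mimetype} objects) is:
// Page side (sketch): hand the in-memory files to the service worker
navigator.serviceWorker.ready.then(function(registration) {
    registration.active.postMessage({ files: MemoryCacheOfAllFiles });
});

// worker.js (sketch): key the entries by the absolute URL the iframe will request
var mycache = {};
self.onmessage = function(event) {
    Object.keys(event.data.files).forEach(function(path) {
        mycache[self.registration.scope + path] = event.data.files[path];
    });
};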
Full code by Mozilla to unzip and serve content:
https://serviceworke.rs/cache-from-zip.html
I'm working on an ASP.NET MVC site where users are supposed to be able to upload images and PDF documents and view them at a later point. I noticed that certain filenames cause the attachments not to show.
I'm using Dropzone.js to provide a drag and drop field. The files are saved by a controller method using HttpPostedFileBase. When a user opens the gallery view, the respective controller method lists all previously uploaded files (filenames) in a ViewBag entry. The view then creates a thumbnail for each image:
foreach (var path in (IEnumerable<string>)ViewBag.Attachments){
...
<img class="attachments-thumbnail" style="cursor: pointer" src="#Url.Content(path)" alt="" />
...
}
Now when I upload an image with the filename h &.jpg, the thumbnail isn't shown. In Firefox, the console shows the error X GET http://localhost:54305/Content/Images/Attachments/1/h%20&.jpg. So it's looking for h%20&.jpg instead of h &.jpg, even though the file was saved to the server as intended as h &.jpg.
On the other hand, when I upload a pdf with the name radio%2E11%2E3%2E1852937.pdf, this again gets saved with its original filename, but this time the error appears the other way around: X GET XHR http://localhost:54305/Content/Images/Attachments/1/radio.11.3.1852937.pdf.
I'm not sure where this happens. When I use the inspector tool, the thumbnail <img> tag points to the correct filename, i.e. the one present on the server. I imagine this must be a very frequent use case, so is there any C# method that makes a filename safe for web use, or will I have to rename the files myself? Ideally, of course, I'd like to keep at least the filenames of the PDFs the way the user chose them, as they may indicate file content, but it seems to be unsafe.
I have a file structure on a web page and am looking for a solution to the following scenario:
The chosen file should be downloaded to the browser cache and opened (if it's an Excel document, open it with Excel, etc.).
Now when the user changes the file, it should be detected and the file should be uploaded again.
Is this even possible with JavaScript?
If yes, where do I store the file (temporary internet folder?) and how do I detect the changes?
The only way for this to work is to have the user select the downloaded file, and then check it for modification.
HTML
<label for="excelFile">Select the excel file: </label><input type="file" id="excelFile" />
JS
//Change event to detect when the user has selected a file
document.querySelector("#excelFile").addEventListener("change", function(e){
    //get the selected file
    var file = this.files[0];
    //get the last modified date
    var lastModified = file.lastModified;
    //check lastModified against stored lastModified
    //this assumes you store the last mod in localStorage (values come back as strings)
    if(!localStorage['excelLastMod'] || Number(localStorage['excelLastMod']) < lastModified){
        //It has been modified, update last mod
        localStorage['excelLastMod'] = lastModified;
        //do upload
    }
});
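The "do upload" part could then be a plain FormData POST, for example (a sketch; "/upload" is an assumed endpoint):
// Sketch of the upload step: send the re-selected file back to the server
function uploadFile(file) {
    var formData = new FormData();
    formData.append('file', file, file.name);
    return fetch('/upload', { method: 'POST', body: formData }); // '/upload' is a placeholder
}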
If you know your user is using Chrome you can use Chrome's FileSystem API.
The way you describe it: No, that is not possible in JavaScript.
It sounds like you want an FTP client.
When the user changes the file, it should be detected and the file should be uploaded again.
That is not possible due to JS having almost no access to the file system.
The only way you can access a file at all is by requesting the user to select one, see:
How to open a local disk file with Javascript?
So the most you could do would be:
File is downloaded.
Based on browser & settings, file may be opened automatically, or not.
User is presented with a file selection dialog that they can use when they are done editing.
Compare selected file to file on server and upload if changed.
After downloading a file, you have no control over it.
For applications that have a protocol registered (such as steam://, for example), you might be able to request the URL to be opened in a program, but that would require an if per file type/program.
Detecting file changes is not at all possible (because you have no access to the file), and uploading again requires the user to select the file manually, using a file dialog.
Thanks for your help and ideas. I saw some software (https://www.group-office.com/) which includes this function, so there has to be a way to do it.
New idea, using the Chrome FileSystem API (@Siguza already said it):
Create file from servers database on users local filesystem with filesystem api
open the file locally (should work with filesystem:http://www.example.com/persistent/info.txt, right?)
poll the file for changes every x seconds
if change detected, upload file back to servers database
I saw some problems with Excel locking the files (Check if file has changed using HTML5 File API), but except for that this should work, shouldn't it?
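A rough sketch of that polling idea with Chrome's (non-standard) FileSystem API — the file name, quota and interval are placeholders:
// Sketch: open the persistent filesystem, then poll the file's modification time
window.webkitRequestFileSystem(window.PERSISTENT, 10 * 1024 * 1024, function(fs) {
    fs.root.getFile('info.txt', { create: false }, function(fileEntry) {
        var lastSeen = 0;
        setInterval(function() {
            fileEntry.getMetadata(function(meta) {
                if (meta.modificationTime.getTime() > lastSeen) {
                    lastSeen = meta.modificationTime.getTime();
                    fileEntry.file(function(file) {
                        // change detected: upload the file back to the server's database,
                        // e.g. with a FormData POST as in the answer above
                    });
                }
            });
        }, 5000); // "every x seconds"
    });
});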
I'm developing some Behat 3 (with Selenium 2) tests for a site that uses FileSaver.js to convert some DOM elements into an XML file for download. (I can't change that bit, so don't ask.)
When I click a button, JavaScript fires that passes the data blob into FileSaver.js, which converts that to an actual file and activates your browser's usual file download dialog. From there, the user clicks ok and says where to save the file.
I would like to test this process, but I can't figure out how to actually click that OK button or select where to save the file.
I'm fairly new to Behat, but the closest I could find to this is another SO question that deals with a file download (How to test file download in Behat), but because this blob file is created on the fly, I can't just use Guzzle to pull the file from the server.
Any suggestions?