I have a web server set up on my Arduino, and the web page is loaded from it. I use AJAX to print the data on the web page.
Similarly, I need to store the data on a local PC as a text file. I was initially successful in storing a text file, but the issue is that the data doesn't get appended to the same file; a new file is created per instance.
Can someone help me solve this issue? I'm restricted to using only JavaScript.
client.println("document.getElementById(\"flux_values_1\").innerHTML = this.responseText;");
client.println("var textToSave =[this.responseText]");
client.println("var textToSaveAsBlob = new Blob([textToSave], {type:'text/plain'});");
client.println("var textToSaveAsURL = URL.createObjectURL(textToSaveAsBlob);");
client.println("var fileNameToSaveAs = \"as\";");
client.println("var downloadLink = document.createElement(\"a\");");
client.println("downloadLink.download = fileNameToSaveAs;");
client.println("downloadLink.href = textToSaveAsURL;");
client.println("document.body.appendChild(downloadLink);");
client.println("downloadLink.click();");
You are actually downloading the file in the browser, and you cannot append data to an existing file via the browser; that has to be done on the backend. The browser will always create and download a new file. This is expected behaviour and cannot be changed.
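If a small backend on the receiving PC is an option, the append can happen there instead. Below is a minimal sketch, assuming a Node.js process running on the PC (this is not part of the original setup; the /log endpoint and the file name are made up). The page served by the Arduino would POST each reading to it instead of triggering a download:
const http = require('http');
const fs = require('fs');

http.createServer(function (req, res) {
  if (req.method === 'POST' && req.url === '/log') {
    let body = '';
    req.on('data', function (chunk) { body += chunk; });
    req.on('end', function () {
      // appendFile creates the file on first use and appends afterwards
      fs.appendFile('flux_values.txt', body + '\n', function (err) {
        res.statusCode = err ? 500 : 200;
        res.end();
      });
    });
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(8080);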
Related
I am trying to download a file using AngularJS. Currently I am sending a GET request to do it.
My file is flower-red.jpg.
My request looks like this:
GET http://localhost:8080/aml/downloadDoc/852410507V/flower-red.jpg
This downloads correctly. But if the file name has spaces, it does not download. Consider this file:
file = flower - red.jpg
My request looks like this:
GET http://localhost:8080/aml/downloadDoc/852410507V/flower%20-%20red.jpg
This does not download, because the request is changed by the spaces in the file name.
This is how I fixed the issue:
// javascript
var url = "http://localhost:8080/aml/downloadDoc/852410507V/flower - red.jpg";
var urlRemoveSpace = url.split(" ").join("");
// now you can make the GET request to download using urlRemoveSpace
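For completeness, here is a sketch of triggering the download from the cleaned URL with a temporary anchor (assuming the page is served from the same origin, so the download attribute is honoured):
var a = document.createElement('a');
a.href = urlRemoveSpace;
a.download = ''; // hint to download instead of navigating
document.body.appendChild(a);
a.click();
a.remove();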
I have a JavaFX application that saves data to a local file data.json which, for example, looks like
data = '[{"name":"Jack","pet":"turtle"},{"name":"John","pet":"black mamba"}]'. Periodically the application adds more entries to this file.
In the HTML file that I load into that application, I need to show all this info. I have a script tag that loads that file:
<script type="text/javascript" src="../Data/data.json" id="dataSourceScript"></script>
Then in the JS code I have var mydata = JSON.parse(data), which loads that JSON into the mydata variable, as described here.
As I need to update the page content when new entries are added, I have a function called every couple of seconds with setInterval() that does that. In order to get the updated file contents, I delete the old <script> tag and add a new one (exactly the same), so that data now has the updated info:
var oldScript = document.getElementById("dataSourceScript")
if (oldScript)
    oldScript.remove()
var newScript = document.createElement("script")
newScript.setAttribute("id", "dataSourceScript")
newScript.setAttribute("src", "../Data/data.json")
document.body.appendChild(newScript)
var mydata = JSON.parse(data)
// then I just add the new entry to the DOM, if there is a new entry
It all works great: if I open my HTML file in a browser and then add a new entry to the file, within a few seconds the page gets updated and shows the new entry too. However, for some reason it does not work in my JavaFX application. It loads the file just once from the initial <script> tag, but if I change data.json, nothing happens. I have to close the application and reopen it to get the new info shown on the page.
(I didn't find any other way to read a file that would work. FileReader just stops reading when the file gets updated, which defeats the purpose; fetch() and XMLHttpRequest() both get blocked by CORS policy; and I cannot create a server to serve the files, or install Node or anything else, as I need pure HTML+JS for the UI.)
Figured it out thanks to the comments, thanks guys.
Yes, the file loaded from the script tag was being cached and not updated. The solution is very easy: I just needed to create a counter variable and append it as a version query string to the new script's src every time I create it, so it is considered a new resource:
var version = 0
...
var oldScript = document.getElementById("dataSourceScript")
if (oldScript)
    oldScript.remove()
var newScript = document.createElement("script")
newScript.setAttribute("id", "dataSourceScript")
// the changing query string makes the WebView treat the file as a new resource
newScript.setAttribute("src", "../Data/data.json?version=" + version++)
document.body.appendChild(newScript)
var mydata = JSON.parse(data)
// then I just add the new entry to the DOM, if there is a new entry
Over the years on Snapchat I have saved lots of photos that I would like to retrieve now. The problem is they do not make it easy to export, but luckily you can go online and request all your data (that's great).
I can see the download link for each of my photos, and if I click download in the local HTML file it starts downloading.
Here's the tricky part: I have around 15,000 downloads to do, and manually clicking each individual one will take ages. I've tried extracting all of the links from the download buttons, which produces lots of URLs (great), but if you paste a URL into the browser you get "Error: HTTP method GET is not supported by this URL".
I've tried a multitude of different Chrome extensions and none of them show the actual download, just the HTML on the left-hand side.
The download button is a clickable link (an <a href> element) that just starts the download in the tab.
I'm trying to figure out the best way to bulk-download each of these individual files.
So, I just looked at their code by downloading my own memories. They use a custom JavaScript function to download your data (a POST request with IDs in the body).
You can replicate this request, but you can also just use their method.
Open your console and use downloadMemories(<url>).
Or, if you don't have the URLs, you can retrieve them yourself:
// grab all download links from the first table on the page
var links = document.getElementsByTagName("table")[0].getElementsByTagName("a");
// each href is a "javascript:downloadMemories(...)" URL, so eval executes it
eval(links[0].href);
UPDATE
I made a script for this:
https://github.com/ToTheMax/Snapchat-All-Memories-Downloader
Using the .json file you can download them one by one with Python:
import requests

# POST to the memory's download link; the response body is the URL of the actual file
req = requests.post(url, allow_redirects=True)
response = req.text
file = requests.get(response)
Then get the correct extension and the date:
# 'date' and 'type' come from the memory's entry in the .json file
day = date.split(" ")[0]
time = date.split(" ")[1].replace(':', '-')
filename = f'memories/{day}_{time}.mp4' if type == 'VIDEO' else f'memories/{day}_{time}.jpg'
And then write it to file:
with open(filename, 'wb') as f:
    f.write(file.content)
I've made a bot to download all memories.
You can download it here
It doesn't require any additional installation: just place the memories_history.json file in the same directory and run it. It skips files that have already been downloaded.
Short answer
Download a desktop application that automates this process.
Visit downloadmysnapchatmemories.com to download the app. You can watch this tutorial guiding you through the entire process.
In short, the app reads the memories_history.json file provided by Snapchat and downloads each of the memories to your computer.
App source code
Long answer (How the app described above works)
We can iterate over each of the memories within the memories_history.json file found in your data download from Snapchat.
For each memory, we make a POST request to the URL stored as the memory's Download Link. The response will be a URL to the file itself.
Then, we can make a GET request to the returned URL to retrieve the file.
Example
Here is a simplified example of fetching and downloading a single memory using NodeJS:
Let's say we have the following memory stored in fakeMemory.json:
{
"Date": "2022-01-26 12:00:00 UTC",
"Media Type": "Image",
"Download Link": "https://app.snapchat.com/..."
}
We can do the following:
// import required libraries
const fetch = require('node-fetch'); // needed for making fetch requests (v2, CommonJS)
const fs = require('fs'); // needed for writing to the filesystem

(async () => {
  const memory = JSON.parse(fs.readFileSync('fakeMemory.json'));
  // POST to the Download Link; the response body is a URL to the file
  const response = await fetch(memory['Download Link'], { method: 'POST' });
  const url = await response.text();
  // We can now use the `url` to download the file.
  const download = await fetch(url, { method: 'GET' });
  const fileName = 'memory.jpg'; // file name we want this saved as
  const fileData = download.body; // contents of the file as a stream
  // Write the contents of the file to disk using Node's file system
  const fileStream = fs.createWriteStream(fileName);
  fileData.pipe(fileStream);
  fileStream.on('finish', () => {
    console.log('memory successfully downloaded as memory.jpg');
  });
})();
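(A note on the example above: with node-fetch v2 loaded via require, top-level await is not available in a CommonJS script, which is why the calls are wrapped in an async function here; node-fetch v3 is ESM-only and would be loaded with import instead.)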
I have a problem (or maybe two) with saving files using the HTML5 File API.
A file comes from the server as a byte array and I need to save it. I tried several ways described on SO:
creating a Blob and opening it in a new tab
creating a hidden anchor tag with "data:" in the href attribute
using FileSaver.js
All approaches let me save the file, but they break it by changing the encoding to UTF-8, while the file (in the current test case) is in ANSI. So it seems I have two problems: one on the server side and one on the client side.
Server side:
The server side is an ASP.NET Web API 2 app whose controller sends the file using an HttpResponseMessage with StreamContent. The ContentType is correct and corresponds to the actual file type.
But as can be seen in the screenshot below, the server's answer (data.length) is less than the actual file size calculated at upload (file.size). It can also be seen that the HTML5 File object has yet another size (f.size).
If I add a CharSet with the value "ANSI" to the ContentType property of the server's response message, the file data is the same as it was at upload, but on saving, the resulting file still has the wrong size and comes out broken:
Client side:
I tried to set the charset using the JS File options, but it didn't help. As can be found here and here, Eli Grey, the author of FileSaver.js, says that
The encoding/charset in the type is just metadata for the browser, not an encoding directive.
which means, if I understood it right, that it is impossible to change the encoding of the file this way.
Issue result: in the end I can successfully download broken files that cannot be opened.
So I have two questions:
How can I save a file "as is" using the File API? At present I cannot use the simple way with a direct link and the 'download' attribute, because of a server-side check for an access_token in the request header. Maybe this is the bottleneck of the problem?
How can I avoid setting CharSet on the server side and send the byte array "as is"? While this problem can probably be hacked around somehow, I suspect it is the more critical one. For example, while the "ANSI" charset solves the problem for the current file, WinMerge shows that its encoding is actually Cyrillic 'Windows-1251', and other files can have any other encoding.
P.S. The issue affects all file types (extensions) except *.txt.
Update
Server side code:
public HttpResponseMessage DownloadAttachment(Guid fileId)
{
    // 'file' (metadata: size, content type, name) is loaded elsewhere in the real code
    var stream = GetFileStream(fileId);
    var message = new HttpResponseMessage(HttpStatusCode.OK);
    message.Content = new StreamContent(stream);
    message.Content.Headers.ContentLength = file.Size;
    message.Content.Headers.ContentType = new MediaTypeHeaderValue(file.ContentType)
    {
        // without this charset, files are sent with a bigger size
        // than they actually are, as shown on image 1
        CharSet = "ANSI"
    };
    message.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
    {
        FileName = file.FileName + file.Extension,
        Size = file.Size
    };
    return message;
}
Client side code (TypeScript):
/*
 * Handler for click event on download <a> tag
 */
private downloadFile(file: Models.File) {
    var self = this;
    this.$service.downloadAttachment(this.entityId, file.fileId).then(
        // on success
        function (data, status, headers, config) {
            var fileName = file.fileName + file.extension;
            var clientFile = new File([data], fileName);
            // here's the issue ---^
            saveAs(clientFile, fileName);
        },
        // on fail
        function (error) {
            self.alertError(error);
        });
}
My code is almost the same as in answers to related questions on SO: instead of setting a direct link in the 'a' tag, I handle the click on it and download the file content via XHR (in my case using the AngularJS $http service). Having got the file content, I create a Blob object (in my case I use the File class, which derives from Blob) and then try to save it using FileSaver.js. I also tried the approach with an encoded URL to the Blob in the href attribute, but it only opens a new tab with a file broken in the same way.

I found that the problem is in the Blob class: calling its constructor with 'normal' file data, I get an instance with the 'wrong' size, as can be seen on the first two screenshots. So, as I understand it, my problem is not in the way I try to save my file, but in the way I create it with the File API.
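What typically causes exactly this symptom is the XHR response being handled as text (and re-encoded as UTF-8) before the Blob is built. Here is a sketch of the usual fix, requesting the body as an ArrayBuffer via the responseType config of the AngularJS $http service (the url and fileName variables are illustrative):
// ask for raw bytes so the Blob is built from binary data, not a UTF-8 string
$http.get(url, { responseType: 'arraybuffer' }).then(function (response) {
    var blob = new Blob([response.data], { type: 'application/octet-stream' });
    saveAs(blob, fileName); // FileSaver.js, as in the original code
});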
Here's the scenario:
A user comes to my website and opens a web page with some JavaScript functionality.
The user edits the data through JavaScript.
The user clicks a save button to save the data. The thing is, it seems like they shouldn't need to download this data from the server, because it's already in JavaScript on the local machine.
Is it possible to save data from JavaScript (executing from a foreign web page) without downloading a file from the server?
Any help would be much appreciated!
For saving data on the client side, without any server interaction, the best I've seen is Downloadify, a small JavaScript + Flash library that allows you to generate and save files on the fly, directly in the browser...
Check this demo.
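For reference, a trimmed sketch along the lines of the Downloadify README (the element IDs and asset paths below are placeholders):
Downloadify.create('downloadify', {
  filename: 'data.txt',
  data: function () {
    // return the string content to be saved
    return document.getElementById('data').value;
  },
  onComplete: function () { alert('Your file has been saved!'); },
  onCancel: function () { alert('You have cancelled the saving of this file.'); },
  swf: 'media/downloadify.swf',         // placeholder path to the Flash movie
  downloadImage: 'images/download.png', // placeholder path to the button image
  width: 100,
  height: 30,
  transparent: true,
  append: false
});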
I came across this scenario when I wanted to initiate a download without using a server. I wrote this jQuery plugin that wraps up the content of a textarea/div in a Blob, then initiates a download of the Blob. It allows you to specify both the file name and type.
jQuery.fn.downld = function (ops) {
    return this.each(function () {
        var _ops = ops || {},
            file_name = _ops.name || "downld_file",
            file_type = _ops.type || "txt",
            file_content = $(this).val() || $(this).html();
        // octet-stream forces the browser to download rather than display
        var _file = new Blob([file_content], { type: 'application/octet-stream' });
        window.URL = window.URL || window.webkitURL;
        var a = document.createElement('a');
        a.href = window.URL.createObjectURL(_file);
        a.download = file_name + "." + file_type;
        document.body.appendChild(a);
        a.click();
        a.remove(); // remove the anchor we just created, not a random one
    });
};
Default Use : $("#element").downld();
Options : $("#element").downld({ name:"some_file_name", type:"html" });
Codepen example http://codepen.io/anon/pen/cAqzE
JavaScript is run in a sandboxed environment, meaning it only has access to specific browser resources. Specifically, it doesn't have access to the filesystem, or to dynamic resources from other domains (web pages, JavaScript, etc.). Well, there are other things too (I/O, devices), but you get the point.
You will need to post the data to the server, which can then trigger a file download, or use another technology such as Flash, Java applets, or Silverlight. (I'm not sure about support for this in the last two, and I also wouldn't recommend using them; it depends what it's for...)
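A minimal sketch of the post-to-server route (the /echo-download endpoint is hypothetical; the server would simply send the posted content back with a Content-Disposition: attachment header):
// POST the in-memory data to a server endpoint that echoes it back as a download
function saveViaServer(content) {
  var form = document.createElement('form');
  form.method = 'POST';
  form.action = '/echo-download'; // hypothetical endpoint
  var field = document.createElement('input');
  field.type = 'hidden';
  field.name = 'text';
  field.value = content;
  form.appendChild(field);
  document.body.appendChild(form);
  form.submit();
  form.remove();
}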
The solution for downloading local/client-side contents via JavaScript is not straightforward. I have implemented one solution using smartclient-html-jsp.
Here is the solution:
I am on a project built on SmartClient. We need to download/export the data of a grid (a table-like structure).
We were using RESTish web services to serve the data from the server side, so I could not hit the URL twice: once for the grid and a second time to export/transform the data for download.
What I did was make two JSPs, namely blank.jsp and export.jsp.
blank.jsp is literally blank; it only exists so I can export the grid data that I already have on the client side.
Whenever the user asks to export the grid data, I do the following (a client-side sketch follows the JSP code below):
a. Open a new window with the URL blank.jsp.
b. Using document.write, create a form in it with one field named text, and put the data to export inside it.
c. POST that form to export.jsp in the same hierarchy.
d. The contents of export.jsp, pasted below, are self-explanatory.
// code start
<%@ page import="java.util.*,java.io.*,java.util.Enumeration"%>
<%
response.setContentType ("text/csv");
//set the header and also the name by which the user will be prompted to save
response.setHeader ("Content-Disposition", "attachment;filename=\"data.csv\"");
String contents = request.getParameter ("text");
if (contents == null || contents.isEmpty ())
    contents = "No data";
else
    contents = contents.replaceAll ("NEW_LINE", "\n");
//Open an input stream on the contents and write them through the
//servlet output stream to the client machine
InputStream in = new ByteArrayInputStream(contents.getBytes ());
ServletOutputStream outs = response.getOutputStream();
int bit;
try {
    // read one byte at a time until end of stream (-1)
    while ((bit = in.read()) >= 0) {
        outs.write(bit);
    }
} catch (IOException ioe) {
    ioe.printStackTrace(System.out);
}
outs.flush();
outs.close();
in.close();
%>
<HTML>
<HEAD>
</HEAD>
<BODY>
<script type="text/javascript">
try {window.close ();} catch (e) {alert (e);}
</script>
</BODY>
</HTML>
// code end
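For reference, the client-side part (steps a-c above) might look like the following sketch; the function name is illustrative, and export.jsp reads the text parameter exactly as in the code above:
function exportGridData(csvText) {
  // a. open a new window with blank.jsp
  var win = window.open('blank.jsp');
  // b. write a form with a single field named "text" holding the data
  // (in practice you may need to wait for blank.jsp to finish loading)
  win.document.write(
      '<form id="exportForm" method="POST" action="export.jsp">' +
      '<input type="hidden" name="text"></form>');
  var form = win.document.getElementById('exportForm');
  // export.jsp turns NEW_LINE markers back into real newlines
  form.elements['text'].value = csvText.replace(/\n/g, 'NEW_LINE');
  // c. POST the form to export.jsp
  form.submit();
}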
The JSP code above is tested and deployed/working in a production environment, and it works cross-browser.
Thanks
Shailendra
You can save a small amount of data in cookies.
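A sketch of that approach (a cookie holds roughly 4 KB at most, and the key below is assumed to contain no regex special characters):
// save a small value client-side in a cookie
function saveValue(key, value) {
  document.cookie = key + '=' + encodeURIComponent(value) +
      '; max-age=' + 60 * 60 * 24 * 365; // keep for one year
}

// read it back later
function loadValue(key) {
  var match = document.cookie.match(new RegExp('(?:^|; )' + key + '=([^;]*)'));
  return match ? decodeURIComponent(match[1]) : null;
}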