I've developed a Universal App for Windows. It is deployed in my business, not through the App Store. It's perfectly functional 99.9% of the time.
But there's this one problem that's happened a couple of times.
I store persistent data in JSON format in my application local data folder.
To store my data, I use:
var storeData = function () {
    return WinJS.Application.local.writeText('data', JSON.stringify(dataObject));
};
And to load my data, I use:
var loadData = function () {
    return WinJS.Application.local.readText('data').then(function (text) {
        dataObject = text ? JSON.parse(text) : {};
    });
};
There have been a couple of cases where my app crashes during the loadData method. When I researched why, it turned out there was an extra file in my local appdata folder. The file that's supposed to be there is called 'data' -- this extra file is called something like 'data-84DB.TMP', evidently a temporary file created as part of the file I/O API. In these crash cases, the .TMP file contains the information you'd normally expect to see in the 'data' file, while the 'data' file is no longer in text format -- when I open it in SublimeText it shows '0000'.
So when my loadData function runs, it crashes.
Now, this is not a situation in which I want it to fail silently. I'd rather it crash than, say, just do a try-catch in loadData and make my dataObject empty or something in the cases where the data didn't save right. I'd rather it crash, so that then at least I can go and find the .TMP file and restore the information that didn't properly save.
Is this problem normal? Am I not following best practices for reading and writing persistent data? How can I fix this issue? Do you guys know what causes it? Might it be that the app unexpectedly shuts down in the middle of a file-writing operation?
I think the best practice is to do reads and saves asynchronously. I guess WinJS.Application.local is a wrapper for the WinRT API, but it works synchronously. I would suggest using Windows.Storage.FileIO.writeTextAsync and Windows.Storage.FileIO.readTextAsync instead.
Also, I found out that JSON.parse throws an error when the string to parse isn't valid JSON -- including when it's empty (""). Your truthiness check does handle the empty string (since "" is falsy), but corrupted content such as the '0000' you observed will still make JSON.parse throw.
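To illustrate (a minimal, hypothetical sketch, independent of WinJS -- parseStored is not a real API, just a name for the pattern):

```javascript
// Hypothetical sketch: how JSON.parse behaves on the inputs discussed above.
function parseStored(text) {
    if (!text) return {};        // covers null, undefined and ''
    return JSON.parse(text);     // still throws on corrupted content like '0000'
}

console.log(JSON.stringify(parseStored('')));        // '{}'
console.log(JSON.stringify(parseStored('{"a":1}'))); // '{"a":1}'

try {
    parseStored('0000');         // leading zeros are not valid JSON
} catch (e) {
    console.log('parse failed:', e.name);            // SyntaxError
}
```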
It's hard to isolate this problem, so please bear with me.
I have two files, called index.js and feed.js. They are both run isomorphically.
index.js
const feed = require('./feed');
console.log('feed', JSON.stringify(feed));
console.log('feed.default', feed.default);
(global || window).feed = feed;
feed.js
console.log('inside feed');
export default 5;
On the server, the following gets printed in order:
inside feed
feed {"default":5}
feed.default 5
But on the client the output is
feed {}
feed.default undefined
inside feed
and when I actually check window.feed on the client (this is after all of this has occurred, of course), it's configured properly and has all the right data in it.
Does anyone know why this is or how I can prevent or work around it cleanly? I believe feed should always be populated with the correct stuff synchronously by the statement after the require.
The webpack config is a little complicated, but it's mostly the same as this one.
I'd be willing to believe it's some kind of race condition (the server has already required it), but I don't understand why it would be lazily loaded to begin with. Because I'm using babel-node and react-hot-loader, I can't exactly step through what's happening inside of __webpack_require__, but I'm pretty sure this isn't the intended behavior, so I figure I'm doing something wrong.
You have a bunch of webpack optimization plugins enabled for the client. Do you really know what they do and how they do it? Try disabling them, especially AggressiveMergingPlugin. Make your configuration as simple as possible.
The only time I had a similar issue was when I accidentally had two modules that require each other. Maybe one of the plugins is doing some module reordering magic and causes a similar condition to arise somehow. Just a guess.
Failing that, examine the contents of the bundle that webpack generates. debugger; statements are useful to jump into the right places.
The question is quite simple, but I seriously couldn't find any sample that demonstrates what I'm trying to achieve -- maybe it's me who didn't get the concept of background tasks in UWP applications (or Windows / Windows Phone 8).
I'm creating an application that polls some data (traffic incidents) and would like to be able to notify the user about the closest ones even when they're not using the application. So I reckon I should use a background task. (I hope I got that part right.)
So in the background task, which I've set to run on a 15-minute timer under the condition of "InternetAvailable", I fetch the data asynchronously, and once it's done I complete the deferral object as required. All works OK.
The question is, what object shall I use in order to persist the data so I could read the data once the application is opened?
I've tried the WinJS.Application.sessionState but that gets lost once the application is ready (opened).
I've tried Windows.Storage.ApplicationData.current.localSettings, but it says there is a type mismatch -- apparently I'm trying to store an object where a string is expected (I reckon)...
So does anyone know what the best practice is here?
Thank you
You have full access to the WinRT API, so you can write a file (and read the same file once the application is opened):
function saveData(dataObj) {
    var applicationData = Windows.Storage.ApplicationData.current;
    var localFolder = applicationData.localFolder;
    return localFolder.createFileAsync("dataFile.txt", Windows.Storage.CreationCollisionOption.replaceExisting).then(function (sampleFile) {
        return Windows.Storage.FileIO.writeTextAsync(sampleFile, JSON.stringify(dataObj));
    });
}
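And the matching read, once the application is opened -- a sketch along the same lines (the "dataFile.txt" name follows the example above; the empty-file fallback is an assumption):

```javascript
function loadData() {
    var localFolder = Windows.Storage.ApplicationData.current.localFolder;
    return localFolder.getFileAsync("dataFile.txt").then(function (sampleFile) {
        return Windows.Storage.FileIO.readTextAsync(sampleFile);
    }).then(function (text) {
        // Fall back to an empty object if nothing was stored yet.
        return text ? JSON.parse(text) : {};
    });
}
```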
I've written this function to try and read a file located in the same directory as my Javascript files and index.html file. I've read from files before, but normally I have the user select the file themselves, so I've never had to create the actual file object.
Does anyone know why the code below doesn't work?
function getFile()
{
    var reader = new FileReader();
    var file = new File("input.txt");
    var str = reader.result;
    reader.readAsText(file);
    return str;
}
Update:
Some additional information (My apologies if I don't answer your questions, I'm really new to this, and everything I know is self-taught).
Server side or client side? I think it is going to be hosted server-side -- I have a domain that I'm going to upload the file to.
Not possible due to security restrictions. You must use a file that was selected by the user. Imagine what would happen if JavaScript could read any file on your hard drive -- nightmare!
I had similar code for a desktop app written in JS. It worked well on IE and FF until Chrome came along (yes, it was a very long time ago!) and started restricting the sandbox ever more. At some point, IE and FF also tightened permissions and the method stopped working (this is the code I used, which no longer works):
try {
    netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
    file = Components.classes["@mozilla.org/file/local;1"].createInstance(Components.interfaces.nsILocalFile);
    file.initWithPath(filename);
    if (file.exists())
    {
        inStream = Components.classes["@mozilla.org/network/file-input-stream;1"].createInstance(Components.interfaces.nsIFileInputStream);
        inStream.init(file, 0x01, 00004, null);
        sInStream = Components.classes["@mozilla.org/scriptableinputstream;1"].createInstance(Components.interfaces.nsIScriptableInputStream);
        sInStream.init(inStream);
        content = sInStream.read(sInStream.available());
    }
} catch (ex) {
    // and so on...
}
At some point FF and Chrome had some startup flags that enabled the sandbox to be lax on certain security restrictions, but in the end I realized that this is simply old code.
Today, I use localStorage to store data on the local machine. The only disadvantage is that you have to implement an import/export functionality so that it is portable to other browsers/machines by copy/paste the raw text.
What I do is simply this. The maximum size of the database is said to be about 10 MB (but deviations are known to exist) and all I store is a single JSON string:
function store(data)
{
    localStorage.data = JSON.stringify(data);
}

function retrieve()
{
    return JSON.parse(localStorage.data);
}
Obviously, these two functions are an oversimplification; you need to handle edge cases like when no data exists or a non-JSON string was imported. But I hope you get the idea of how to store local data on modern browsers.
With the File() constructor you can only create new File objects; you cannot read files from the local disk.
It is not quite clear what exactly you want to achieve.
You say that input.txt is located in the same directory as your HTML and JS. Therefore it is located on the server.
If you want to read a file from the server, you have to use an AJAX request, which is already explained in another question.
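For completeness, here is a minimal sketch of such a request using fetch (the relative URL and the error handling are assumptions; the file must be served over HTTP -- fetch cannot read the local disk directly):

```javascript
// Hypothetical sketch: read input.txt from the same directory as index.html.
function loadTextFile(url) {
    return fetch(url).then(function (response) {
        if (!response.ok) throw new Error('HTTP ' + response.status);
        return response.text();
    });
}

// Usage: loadTextFile('input.txt').then(function (str) { console.log(str); });
```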
I am considering porting a heavily-used (lots of traffic) sockets-based architecture from .NET to Node.js using Socket.IO.
My current system is developed in .NET and uses some scripting languages loaded at runtime, so I can do hot-fixes if needed by issuing a reload command to the server, without having to restart the different server/dispatcher processes.
I originally built it this way so, like I said, I could do hot fixes if needed and also keep the system available with transparent fixes.
I am new to Node.JS but this is what I want to accomplish:
Load javascript files on demand at runtime, store them in variables somewhere and call the script functions.
What would be the best solution? How do I call a specific function from a JavaScript file loaded at runtime as a string? Can I load a JavaScript file, store it in a variable, and call its functions in a normal way, just like with require?
Thanks!
If I understood your question correctly, you can check out the vm module.
Or if you want to be able to reload required files, you must clear the cache and reload the file, something this package can do. Check the code, you'll get the idea.
Modules are cached after the first time they are loaded. This means
(among other things) that every call to require('foo') will get
exactly the same object returned, if it would resolve to the same
file.
Multiple calls to require('foo') may not cause the module code to be
executed multiple times. This is an important feature. With it,
"partially done" objects can be returned, thus allowing transitive
dependencies to be loaded even when they would cause cycles.
More information can be found here.
Delete the cached module:
delete require.cache[require.resolve('./mymodule.js')]
Require it again (perhaps via a require inside a function you can call).
Update:
Someone I know is implementing a similar approach. You can find the code here.
You can have a look at my module-invalidate module that allows you to invalidate a required module. The module will then be automatically reloaded on further access.
Example:
module ./myModule.js
module.invalidable = true;
var count = 0;
exports.count = function () {
    return count++;
};
main module ./index.js
require('module-invalidate');
var myModule = require('./myModule.js');
console.log( myModule.count() ); // 0
console.log( myModule.count() ); // 1
mySystem.on('hot-reload-requested', function () {
    module.invalidateByPath('./myModule.js'); // or module.constructor.invalidateByExports(myModule);
    console.log( myModule.count() ); // 0
    console.log( myModule.count() ); // 1
});
Say I have the following Node program, a machine that goes "Ping!":
var machine = require('fs').createWriteStream('machine.log', {
    flags : 'a',
    encoding : 'utf8',
    mode : 0644
});

setInterval(function () {
    var message = 'Ping!';
    console.log(message);
    machine.write(message + '\n');
}, 1000);
Every second, it will print a message to the console and also append it to a log file (which it will create at startup if needed). It all works great.
But now, if I delete the machine.log file while the process is running, it will continue humming along happily, but the writes will no longer succeed because the file is gone. But it looks like the writes fail silently, meaning that I would need to explicitly check for this condition. I've searched the Stream docs but can't seem to find an obvious event that is emitted when this type of thing occurs. The return value of write() is also not useful.
How can I detect when a file I'm writing to is deleted, so I can try to reopen or recreate the file? This is a CentOS box, if that's relevant.
The writes actually do not fail.
When you delete a file that is open in another program, you are deleting a named link to that file's inode. The program that has it open still points to that inode and will happily keep writing to it, actually writing to disk. Only now you don't have a way to look at it, because you deleted the named reference. (If there were other references, e.g. hard links, you still would!)
That's why programs that expect their log files to "disappear" (because of logrotate, say) usually support a signal (usually SIGHUP, sometimes SIGUSR1) that tells them to close their file (at which point it is really gone, because now there are no links to it anywhere) and re-create it.
You should consider something like that as well.