Using pouchdb-load with a local file - javascript

I'm using React to build a .html page that facilitates access to a complicated local file system at my workplace. It needs to be editable by anyone. I've decided to use PouchDB to handle a database storing all my buttons with the links attached to them.
This needs to be deployed on multiple computers. The command npm run build works perfectly fine; however, the button database is not shared between the computers, since the PouchDB database is stored locally in the browser.
So I came up with a way to dump the Pouch database to a .json file.
The dumping procedure works fine; however, when I try to get the data from the .json file using the pouchdb-load plugin, I get a CORS error.
this.db.load('file:///./tracker.json').then(function () {
  // done loading!
  console.log(this.db);
}).catch(function (err) {
  // HTTP error or something like that
  console.log(err);
});
I get an undefined object which is related to a CORS error:
Access to XMLHttpRequest at 'file:///tracker.json' from origin 'null' has been blocked by CORS policy: Cross origin requests are only supported for protocol schemes: http, data, chrome, chrome-extension, chrome-untrusted, https.
message: undefined
name: "unknown"
status: 0
When I omit the 'file:///' prefix from the file path, I get the following:
SyntaxError: Unexpected token < in JSON at position 0
at JSON.parse (<anonymous>)
at index.js:16
at Array.forEach (<anonymous>)
at parseDump (index.js:12)
at loadString (index.js:32)
at index.js:87
at onSuccess (index-browser.js:288)
at index-browser.js:330
at XMLHttpRequest.xhr.onreadystatechange (index-browser.js:198)
The project needs to be built as a local .html file since we need to be able to open links to files.
What could I do to make pouchdb-load work in such a configuration?
I'm quite lost at the moment, so any help is appreciated! What am I doing wrong? Is there any simple trick to open files from the file system without any action from the user? Is there any other way I could store a copy of my PouchDB database on the file system? And if so, how could I retrieve it?

Indeed, the dreaded CORS error, what a pain! Of course, CORS is very necessary; without it, local file exploits would be rife.
The obvious solution is to serve the file over HTTP, either locally (Node.js comes to mind) or from a remote server. Here I offer a workaround which is suitable for this use case, though maybe not for others.
This solution hinges on two facts: the browser allows loading local script files relative to the local document, and pouchdb-load supports loading a database from a string.
For the benefit of those starting from scratch, let's begin by creating a dump of a CouchDB database. I used curl and jq [1], [2] to make a dump suitable for import:
curl -X GET "http://localhost:5984/sratch/_all_docs?include_docs=true" | jq-win64 "{"""docs""": [.rows[].doc]}" > db.js
(Above from a VSCode terminal on windows - hence the triple quotes 😒)
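If the shell quoting gets too painful, the same transform is easy to sketch in Node instead of jq (a sketch, not the original answer's method; `toCannedDump` is a hypothetical helper name, and the URL mirrors the curl call above):

```javascript
// Reshape a CouchDB _all_docs response (include_docs=true) into the
// { docs: [...] } form that pouchdb-load's string loader is fed below.
function toCannedDump(allDocsResponse) {
  return { docs: allDocsResponse.rows.map((row) => row.doc) };
}

// Hypothetical usage against a local CouchDB (same URL as the curl call):
// const res = await fetch("http://localhost:5984/sratch/_all_docs?include_docs=true");
// const dump = toCannedDump(await res.json());
// require("fs").writeFileSync("db.js", "let CANNED_LOCAL_DB = " + JSON.stringify(dump) + ";");
```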
OK, so now we have a .js file, which for me looks like this:
{
  "docs": [
    {
      "_id": "a369bc7329751c1642476e752b000f8d",
      "_rev": "1-6a0a8c1a35c6bb106fc3c53cdc99c034",
      "id": "4567",
      "user": "test2",
      "address": {
        "city": "New York"
      }
    },
    {
      "_id": "a369bc7329751c1642476e752b00290e",
      "_rev": "1-29508c896c6c773775c2c46f67714aad",
      "id": "4568",
      "user": "test2",
      "info": {
        "city": "Atlanta"
      }
    }
    // etc....
Of course this is a JSON file with a .js extension. Why? Let's change the raw JSON into a variable with a quick edit of the dump file:
let CANNED_LOCAL_DB = {
  docs: [
    {
      _id: "a369bc7329751c1642476e752b000f8d",
      // etc....
Excellent, the database is now a JavaScript object. Let's load it.
Bare bones HTML:
<html>
  <body>
    <script src="./db.js"></script>
    <script src="./scripts/pouchdb.js"></script>
    <script src="./scripts/pouchdb.memory.js"></script>
    <script src="./scripts/pouchdb.load.js"></script>
    <script src="./scripts/index.js"></script>
  </body>
</html>
Here's the index.js code, proving the concept.
// "local" db
const ldb = new PouchDB("ldb", { adapter: "memory" });
// load the database from the object, stringified
ldb.load(JSON.stringify(CANNED_LOCAL_DB))
.then(() => {
CANNED_LOCAL_DB = undefined;
return ldb.info();
})
.then((info) => {
console.log(JSON.stringify(info, undefined, 3));
})
.catch((err) => {
console.log(err.toString());
});
The canned db object is set to undefined after loading; there's no reason to let it linger in memory.
Do understand that db.js is loaded synchronously, which may have an ugly impact with massive canned DBs. This can be worked around, but that is beyond the scope of this answer.
[1] CouchDB dump to file and load from file
[2] jq, a lightweight and flexible command-line JSON processor

Related

Is it possible to link a random html site with node javascript?

Is it possible to link a random site with node.js? When I say that, I mean: is it possible to link it with only a URL? If not, then I'm guessing it means having the file.html inside the JavaScript directory. I really want to know if it's possible, because the HTML is not mine and I can't add the line of code to link it with JS that goes something like (not 100% sure) <src = file.html>
I tried doing document = require('./page.html'); and ('./page'), but it didn't work, and when I removed the .html at the end of require it would say module not found.
My key point is that the site shows the player count on some servers, and I want to get that number by linking it with JS and then using it in some code I've already tested (in the inspect-element console), but I don't know how to link it properly to JS.
If you want to take a look at the site, here it is: https://portal.srbultras.info/#servers
If you have any ideas how to link a stranger's HTML with JS, I'd really appreciate hearing them!
You cannot require HTML files unless you use something like Webpack with html-loader, but even in this case you can only require local files. What you can do, however, is to send an HTTP Request to the website. This way you get the same HTML your browser receives whenever you open a webpage. After that you will have to parse the HTML in order to get the data you need. The jsdom package can be used for both steps:
const { JSDOM } = require('jsdom');

JSDOM.fromURL('https://portal.srbultras.info/')
  .then(({ window: { document } }) => {
    const servers = Array.from(
      document.querySelectorAll('#servers tbody>tr')
    ).map(({ children }) => {
      const name = children[3].textContent;
      const [ip, port] = children[4]
        .firstElementChild
        .textContent
        .split(':');
      const [playersnum, maxplayers] = children[5]
        .lastChild
        .textContent
        .split('/')
        .map(n => Number.parseInt(n));
      return { name, ip, port, playersnum, maxplayers };
    });
    console.log(servers);
    /* Your code here */
  });
However, grabbing the server information from a random website is not really what you want to do, because there is a way to get it directly from the servers. Counter Strike 1.6 servers seem to use the GoldSrc / Source Server Protocol that lets us retrieve information about the servers. You can read more about the protocol here, but we are just going to use the source-server-query package to send queries:
const query = require('source-server-query');

const servers = [
  { ip: '51.195.60.135', port: 27015 },
  { ip: '51.195.60.135', port: 27017 },
  { ip: '185.119.89.86', port: 27021 },
  { ip: '178.32.137.193', port: 27500 },
  { ip: '51.195.60.135', port: 27018 },
  { ip: '51.195.60.135', port: 27016 }
];
const timeout = 5000;

Promise.all(servers.map(server => {
  return query
    .info(server.ip, server.port, timeout)
    .then(info => Object.assign(server, info))
    .catch(console.error);
})).then(() => {
  query.destroy();
  console.log(servers);
  /* Your code here */
});
Update
servers is just a normal JavaScript array consisting of objects that describe servers, and you can see its structure when it is logged into the console after the information has been received, so it should not be hard to work with. For example, you can access the playersnum property of the third server in the list by writing servers[2].playersnum. Or you can loop through all the servers and do something with each of them by using functions like map and forEach, or just a normal for loop.
But note that in order to use the data you get from the servers, you have to put your code in the callback function passed to the then method of Promise.all(...), i.e. where console.log(servers) is located. This has to do with the fact that it takes some time to get the responses from the servers, and for that reason server queries are normally asynchronous, meaning that the script continues execution even though it has not received the responses yet. So if you try to access the information in the global scope instead of the callback function, it is not going to be there just yet. You should read about JavaScript Promises if you want to understand how this works.
Another thing you may want to do is to filter out the servers that did not respond to the query. This can happen if a server is offline, for example. In the solution I have provided, such servers are still in the servers array, but they only have the ip and port properties they had originally. You could use filter in order to get rid of them. Do you see how? Tell me if you still need help.
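One way to do that filtering (a sketch; it assumes, as in the code above, that servers which responded had the query info merged in, giving them a playersnum property):

```javascript
// Keep only the servers whose query succeeded, i.e. those that gained
// a playersnum property when Object.assign merged the query info in.
function onlyResponding(servers) {
  return servers.filter((server) => 'playersnum' in server);
}
```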

Why does my code that attempts to copy files from the root directory of Firebase Storage to a new folder not work?

I am trying to copy or move files from the root in Firebase Storage to a folder; more specifically, from users/displayName/uid/ to users/displayName/uid/somefolder. I have read that there is no method in the Firebase Storage API to copy a file you've already uploaded, and that you have to download the data and re-upload it. I could not find any sample code, however. Therefore, I wrote the following code to try to accomplish that, but it does not work; see the error below to find out why.
Here's the code I have written:
const listRef = firebase.storage().ref(`users/${this.state.displayName}/${this.state.uid}`);
listRef.listAll().then((res) => {
  res.items.forEach((itemRef) => {
    firebase.storage().ref(`users/${this.state.displayName}/${this.state.uid}/somefolder`).put(itemRef);
  });
}).catch(function (error) {
  console.log(error);
});
So, this code does not work and nothing happens. No folder is created. Here's the error that comes up:
FirebaseStorageError {code_: "storage/invalid-argument", message_: "Firebase Storage: Invalid argument in `put` at index 0: Expected Blob or File.", serverResponse_: null, name_: "FirebaseError"}
code: (...)
code_: "storage/invalid-argument"
message: (...)
message_: "Firebase Storage: Invalid argument in `put` at index 0: Expected Blob or File."
name: (...)
name_: "FirebaseError"
serverResponse: (...)
serverResponse_: null
__proto__: Object
By all appearances, the put method expects a file or blob and not the items that you get with the listAll() method.
Any ideas on how to fix this issue and move or copy a file to a folder successfully? Real code samples will be appreciated. Thank you.
You can't pass a reference to put(). According to the API documentation, there are three different things you can pass: a Blob, a Uint8Array, or an ArrayBuffer.
The problem, however, is that the JavaScript web client SDK doesn't give you a way of directly downloading data in any of those formats. You're going to have to figure out how to do that yourself, perhaps by getting a download URL, or by offloading that work to a backend that can use a server SDK to copy the data around.
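The download-URL route could be sketched like this with the web SDK used in the question (untested assumptions: the bucket's CORS configuration allows fetching the download URL from your origin, and `copyIntoFolder`/`destPath` are hypothetical helper names):

```javascript
// Derive the destination path: same file name, nested one folder deeper.
// e.g. users/jane/uid123/a.png -> users/jane/uid123/somefolder/a.png
function destPath(fullPath, folder) {
  const parts = fullPath.split("/");
  const name = parts.pop();
  return parts.concat([folder, name]).join("/");
}

// Download each object's bytes via its URL, then re-upload them as a Blob.
function copyIntoFolder(itemRef, folder) {
  return itemRef.getDownloadURL()
    .then((url) => fetch(url))
    .then((resp) => resp.blob())
    .then((blob) => firebase.storage().ref(destPath(itemRef.fullPath, folder)).put(blob));
}
```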

Can't Get CloudKit to Authenticate (Using Javascript and a Server-to-Server Key)

I'm trying to use Apple's cloudkit.js file to make a server-to-server connection to CloudKit. However, despite messing with the config for several hours, I can't seem to get CloudKit to consider my request valid.
My config logic is pretty straightforward:
const privateKeyFile = `${__dirname}/eckey.pem`;
const keyID = '*some key ID*';

const config = {
  containers: [
    {
      containerIdentifier: 'iCloud.com.*someNameSpace.someProject*',
      environment: 'development',
      serverToServerKeyAuth: { keyID, privateKeyFile },
    },
  ],
  services: {
    fetch,
  },
};

CloudKit.configure(config);
const container = CloudKit.getDefaultContainer();
container.setUpAuth();
However, that final container.setUpAuth(); always results in {"uuid":"*some UUID*","serverErrorCode":"AUTHENTICATION_FAILED","reason":"no auth method found"}.
I've even tried adding console.log lines inside cloudkit.js itself to log what gets sent to CloudKit, and it does appear to have the correct headers:
_host: 'https://api.apple-cloudkit.com',
_path: '',
_params: { ckjsBuildVersion: '17AProjectDev77', ckjsVersion: '1.2.0' },
_headers:
{ 'content-type': 'text/plain',
'X-Apple-CloudKit-Request-ISO8601Date': '2017-03-03T00:52:48Z',
'X-Apple-CloudKit-Request-KeyID': '*my key*',
'X-Apple-CloudKit-Request-SignatureV1': '*what looks like a valid signature*' },
_body: undefined,
_method: 'GET',
_wsApiVersion: 1,
_containerIdentifier: '*my container identifier*',
_containerEnvironment: 'development',
_databaseName: 'public',
_apiModuleName: 'database',
_responseClass: [Function: a],
_apiEntityName: 'users',
_apiAction: 'current' }
I've lost hours trying to grok the (minified) cloudkit.js file, and I still have no idea why things aren't working. Any help at all, even just debugging ideas, would be greatly appreciated.
P.S. I'm sure I have access to the CloudKit container because when I go to https://icloud.developer.apple.com/dashboard/ it's there in the URL https://icloud.developer.apple.com/dashboard/#*myContainerId*.
Apparently there are two (or more?) versions of cloudkit.js. Because Apple goes out of their way not to use npm like every other company on the planet (even though they use it in their CloudKit instructions), and because the documentation for server-to-server CloudKit connections is something of an afterthought, it's very easy to wind up with the web version when you really want the Node one. And (unless you're really good at reading minified code) there's no way to tell just by looking at it.
It turns out I had somehow gotten the web-focused version of cloudkit.js, and I needed the Node-focused version. After I repeated the process listed here I got that version and was finally able to get my CloudKit code working.
Moral of the story: if the company providing your API libraries is too stupid/arrogant/whatever to use npm like everyone else, you have to be very careful not to shoot yourself in the foot. Be certain you're following the Node-specific instructions linked above.

Session cookies not working in Electron

I'm looking at implementing a login system in an Electron [0] application I'm building, but I'm getting stuck on handling the session. Basically I want to store the user's session so it is persisted between application restarts (if "Remember me" is enabled).
I have to make use of an existing back-end which works with cookie authentication, and I'm not able to change anything there.
From the Electron documentation on the Session object [1], I gathered that I should use a partition like e.g. persist:someName in order to have persistent storage, but it does not seem to be persisted between application restarts.
The way I currently set the cookie is as follows:
// main-process/login.js
const session = require('electron').session;
const currentSession = session.fromPartition('persist:someName').cookies;

currentSession.set({
  name: 'myCookie',
  url: 'https://www.example.com',
  value: 'loggedin=1',
  expirationDate: 1531036000
}, function (error) {
  console.log('Cookie set');
  if (error) {
    console.dir(error);
  }
});
After running this, I see the Cookie set output, but when restarting the app and running the following code:
// main.js
const session = require('electron').session;
const currentSession = session.fromPartition('persist:someName').cookies;

currentSession.get({}, function (error, cookies) {
  console.dir(cookies);
  if (error) {
    console.dir(error);
  }
});
The output returned is [].
Any pointers as to what I'm doing wrong or need to do differently would be highly appreciated!
[0] http://electron.atom.io
[1] http://electron.atom.io/docs/api/session/
An alternative might be to take a look at electron-json-storage. Using this plugin, you can write JSON to a file on disk throughout the user session and then read that file back on application load to restore the user's state.

MeteorJS what to put under /server folder?

I'm following the Discover Meteor book, and the book suggests putting a JS file under the /collections folder. The JS file defines server-side methods:
Meteor.methods({
  post: function (postAttributes) {
    var user = Meteor.user();
    var postWithSameLink = Posts.findOne({ url: postAttributes.url });

    // ensure the user is logged in
    if (!user)
      throw new Meteor.Error(401, "You need to login to post new stories");

    // ensure the post has a title
    if (!postAttributes.title)
      throw new Meteor.Error(422, 'Please fill in a headline');

    // check that there are no previous posts with the same link
    if (postAttributes.url && postWithSameLink) {
      throw new Meteor.Error(302,
        'This link has already been posted',
        postWithSameLink._id);
    }

    // pick out the whitelisted keys
    var post = _.extend(_.pick(postAttributes, 'url', 'title', 'message'), {
      userId: user._id,
      author: user.username,
      submitted: new Date().getTime(),
      commentsCount: 0,
      upvoters: [],
      votes: 0
    });

    var postId = Posts.insert(post);
    return postId;
  }
});
Well, in that case, isn't the whole logic accessible to the public, since Meteor gathers all JavaScript files in your tree, with the exception of the server, public, and private subdirectories, for the client?
Is this a concern?
What should I put in the server folder?
There are many server-only things that might go in that folder – subscriptions, allow rules, cron jobs, etc.
The server folder is the only place from which your code is not visible to the general public. Naturally, it's where you should put your security-related code: allow/deny rules, accounts configuration, etc. If you talk to an external API and want to place your access keys in code, the /server folder is the only acceptable place to do so.
Putting your server-side logic in an accessible folder is not a critical issue, as server-side code cannot be altered from the client. The only security concern is that someone might study your code and find a breach if you took a shortcut somewhere. Also, you're clogging the connection with code that's not needed on the client.
I'd also say that most methods should go in /server, but that depends on what you need. Making method code accessible on the client allows you to take advantage of latency compensation (here, third paragraph below the example), but you need to make sure that the client-side simulation doesn't produce side effects that would interfere with the actual (server-side) modifications.
In your case, you can put this method in /collections or /model folder and don't worry about this. You can also put it in /server and things will work as well.
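As a rough map of where things go in a classic Meteor tree, per the visibility rules quoted above (folder names besides /server, /client, /public, and /private are just conventions):

```
/client       sent only to the browser
/server       sent only to the server: keys, allow/deny rules, publications
/public       static assets, served as-is
/private      server-only assets (available via the Assets API)
/collections  shared code: collections and methods (enables latency compensation)
```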
