I have a simple app that doesn't have a DB, because there is no data I need to save other than a few small things: for example, access tokens, or functions with timeouts that will trigger things in the future.
I can simply store these in a variable, but what if the server gets restarted?
I am hosting on Vercel; is that something that might happen?
Are there good alternatives to save this kind of data without a DB?
I'm developing an app for Firefox OS and I need to retrieve/send data from/to my DB. I also need to use this data in my logic implementation, which is in JS.
I've been told that I cannot implement PHP in Firefox OS, so is there any other way to retrieve the data and use it?
PS: This is my first app that I'm developing, so my programming skills are kind of rough.
You can use a local database in JS, e.g. PouchDB, TaffyDB, PersistenceJS, LokiJS or jStorage.
You can also save data to a backend server e.g. Parse or Firebase, using their APIs.
Or you can deploy your own backend storage and save data to it using REST.
You should stick to the basic communication paradigms when sending/receiving data to/from a DB. In your case, you need to pass data to the DB over the web, through an application layer.
Never, ever let an app communicate with your DB directly!
So what you need to do first is implement a wrapper application that gives controlled access to your DB. That is often done in PHP, for example. Your PHP application then offers the interfaces through which external applications (like your FFOS app) can communicate with the DB.
Since this touches on very basic programming knowledge, please give an idea of how much you know about programming in general. I'll then consider offering further details.
It might be a bit harder than you expect, but it can also turn out easier than you think. Using MySQL as a backend has serious implications. For example, MySQL doesn't provide any HTTP interface as far as I know. In other words, for most SQL-based databases, you'll have to use some kind of middleware to connect your application to the database.
Usually the middleware is a server that publishes some kind of HTTP API, probably REST-style, or even RPC such as JSON-RPC. The language in which you write the middleware doesn't really matter. The serious problem you'll face with this approach is restricting data: preventing users from accessing data they shouldn't have access to.
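As a rough illustration of that access-restriction problem, here is a minimal sketch (all names hypothetical) of the kind of check such a middleware has to perform before returning rows to a client:

```javascript
// Hypothetical sketch: the middleware must never forward a query result
// without first checking who is asking and filtering what they may see.
function handleQuery(user, requestedOwnerId, rows) {
  // Reject outright if the caller asks for someone else's data.
  if (user.id !== requestedOwnerId) {
    return { status: 403, body: { error: 'forbidden' } };
  }
  // Even for an allowed request, only return rows owned by the caller.
  const visible = rows.filter(function (row) {
    return row.ownerId === user.id;
  });
  return { status: 200, body: visible };
}
```

In a real setup this function would sit behind an HTTP route and the `rows` would come from a SQL query, but the principle is the same: the client never talks to the database, only to this guarded layer.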
There is also another option, if you want a database plus synchronization on the server: CouchDB + PouchDB gives you that for free. I mean, it's really easy to set up, but you'll have to redesign some parts of your application. If your application makes a lot of data changes it might end up filling your disks, but if you're just starting out, it's possible that this setup will be more than enough.
Is it really helpful to load data from a local database created using PouchDB?
Please share your experience if you have used PouchDB: pros and cons.
We have a website which loads 100,000 records on page load, and then performs many queries on this data.
What I did: created a database using their getting-started guide: http://pouchdb.com/getting-started.html
Is something like a wildcard query possible on this?
For 100,000 documents that the user is simply querying, syncing all of them to the client first sounds like it might be overkill. That's a huge amount of data for your application to wait for at page start.
What you may be interested in trying, though, is storing your data on CouchDB, querying the remote CouchDB, and then selectively syncing documents as needed to the client using filtered replication. It really depends on how badly you need sync, though, and if the user is ever going to modify those documents and need the changes to be synced back.
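A sketch of what that could look like with PouchDB's filtered replication. The field names, URL, and filter logic here are made up, and the replication call is guarded so the snippet is inert where PouchDB isn't loaded:

```javascript
// Hypothetical filter: only documents for the requested project reach
// the client. CouchDB/PouchDB pass the filter each doc plus a request
// object whose .query carries the query_params given below.
function projectFilter(doc, req) {
  return doc.projectId === req.query.projectId;
}

// Browser-side wiring (skipped where PouchDB is not available):
if (typeof PouchDB !== 'undefined') {
  var local = new PouchDB('local');
  local.replicate.from('https://example.com/mydb', { // hypothetical URL
    filter: projectFilter,
    query_params: { projectId: 'p1' },
    live: true // keep pulling matching changes as they happen
  });
}
```

With this, only the selected slice of the remote database is synced down, instead of all 100,000 documents.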
Well, since it's JSON, you can do the queries in JavaScript. You could start by using localStorage, and move on to PouchDB if you need more space or need the other functions PouchDB provides. But if you just want to be able to filter/search records that you've already retrieved on page load, then you can write your filtering logic in JavaScript.
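For example, a simple client-side "wildcard" filter over records you already have in memory might look like this (a sketch, not PouchDB's query API):

```javascript
// Turn a shell-style wildcard pattern ("ali*", "?ob") into a RegExp.
function wildcardToRegExp(pattern) {
  // Escape regex metacharacters, then map * to ".*" and ? to ".".
  const escaped = pattern.replace(/[.+^${}()|[\]\\]/g, '\\$&');
  return new RegExp(
    '^' + escaped.replace(/\*/g, '.*').replace(/\?/g, '.') + '$',
    'i'
  );
}

// Keep only records whose given field matches the wildcard pattern.
function wildcardFilter(records, field, pattern) {
  const re = wildcardToRegExp(pattern);
  return records.filter(function (r) {
    return re.test(String(r[field]));
  });
}
```

Usage: `wildcardFilter(records, 'name', 'ali*')` keeps every record whose `name` starts with "ali", case-insensitively.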
I'm a Meteor newbie. I came across the Meteor Streams package which allows for "Real Time messaging for Meteor". Here's what it can do:
Meteor Stream is a distributed EventEmitter across Meteor. It can be managed with filters and has a good security model (inherited from the existing Meteor security model). You can create as many streams as you want, and it is independent of Mongo.
With Meteor Streams, you can communicate between
client to clients
server to clients
client to server
server to servers
There's an example of it being used in a realtime blackboard where users can draw together. Being a Meteor newbie, I ask out of ignorance: what is the difference between using something like this and just updating a session, i.e. Session.set and Session.get? From what I've seen of sessions being used, two browsers can be open and updated with the same information via Session.set. So in an environment where two people are drawing, why can't it just be done with Session setting instead of Meteor collections or Streams? What is it I'm not understanding about Session setting? I'm probably wrong in thinking sessions could be used instead; I just would like to know why. It will help me understand Sessions in Meteor and the Meteor Streams package.
A Session variable is a quick variable created so you can reactively change stuff in templates.
E.g you have this in your template:
<template name="hello">
{{message}}
</template>
With it, a template helper:
Template.hello.message = function() { return Session.get("message") }
If you do something like Session.set("message", "Hi there"), the HTML will say Hi there. The idea is that you can easily change your HTML using this. It's a type of one-way data binding.
A Meteor Stream helps you communicate between the browser and server (or other combinations of servers and clients) so you can send messages to and fro, but it won't help you change the HTML.
Likewise, the Session won't help you communicate between the browser and server; it can help change the HTML when you have a result in your events, or pass data reactively between your JavaScript code and the HTML that the user sees.
With the blackboard example, Streams let you share the data the other users have drawn, but they won't help you draw onto your own blackboard. (In the case of the blackboard you could use Streams alone, because you're updating a canvas with JavaScript, so you don't need a Session.) You can't use Session on its own to reach other users; you need Meteor Streams to communicate with them.
You could use libraries like jQuery to update your HTML too! Using a Session is by far the easiest, though, because you can use it all over and only have to update one thing for all the rest to change.
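To make the one-way binding idea concrete, here is a stripped-down sketch of the pattern in plain JavaScript. This is not Meteor's actual implementation (Meteor tracks dependencies automatically inside helpers); it just shows why a single Session.set can update everything that reads the value:

```javascript
// Minimal Session-like store: set() notifies everyone who registered
// interest in a key, the way Meteor reruns helpers that called get().
function makeSession() {
  const values = {};
  const listeners = {};
  return {
    get: function (key) {
      return values[key];
    },
    set: function (key, value) {
      values[key] = value;
      // Rerun every "helper" that depends on this key.
      (listeners[key] || []).forEach(function (fn) { fn(value); });
    },
    // In Meteor this registration happens automatically via Tracker.
    onChange: function (key, fn) {
      (listeners[key] = listeners[key] || []).push(fn);
    }
  };
}
```

Usage: register a renderer with `onChange("message", render)`; every later `set("message", ...)` reruns it, which is the one-way flow from data to HTML.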
I am planning to use the Qualtrics REST API in order to get the data collected from a survey. Can I still retain Meteor's reactivity directly through the REST API, or should I save the data from the REST API into MongoDB to enable real-time updates within the app?
Any advice and further reading will be great.
This will probably sound like a noob question, but I am just starting off with Meteor and JS as server-side code and have never used a web API before.
It entirely depends on what you do with the data it returns. Assuming you're either polling periodically or the API has some kind of push service (I've never heard of it before, so I have no idea), you would need to store the data it returns in a reactive data source: probably a Collection or Session variable, depending on how much persistence is required. Any Meteor templates that access these structures have reactivity built in, as documented here.
Obviously, you will probably need to be polling the API at an appropriately regular interval for this set up to work though. Take a look at Meteor.setInterval, or the meteor-cron package, which is probably preferable.
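A rough sketch of that polling setup. The response shape, URL, and collection name are assumptions (not the actual Qualtrics API), and the Meteor-specific wiring is guarded so only the pure transform runs elsewhere:

```javascript
// Hypothetical response shape; keeping the transform pure makes it
// reusable no matter how the data is fetched.
function toDocs(apiResponse) {
  return (apiResponse.responses || []).map(function (r) {
    return { _id: r.responseId, answers: r.values };
  });
}

// Meteor-only wiring (skipped outside Meteor): poll on an interval and
// upsert into a collection, which templates then display reactively.
if (typeof Meteor !== 'undefined') {
  var Surveys = new Mongo.Collection('surveys');
  Meteor.setInterval(function () {
    var res = HTTP.get('https://example.qualtrics.com/...'); // hypothetical URL
    toDocs(res.data).forEach(function (doc) {
      Surveys.upsert(doc._id, { $set: doc });
    });
  }, 60 * 1000); // poll once a minute
}
```

The upserts are what give you reactivity for free: any template subscribed to the collection rerenders when new responses arrive.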
I have made an application that receives location data from a few web clients at regular intervals. I made a quick implementation using CouchDB, but as CouchDB creates a new revision for each update and the data is updated quite frequently, it consumed a lot of disk space, whereas the historic data was of little significance. I looked into MongoDB instead, but as I was thinking about how to structure the MongoDB implementation, I had another idea:
The global object is in the process scope, so it can be used to share data between sessions. Persistence beyond session is not required, so I dropped the database completely and stored all the data in the global object (and persisted some data for user convenience in the HTML5 localStorage using javascript). The complexity of the backend was greatly reduced, and the solution felt somewhat elegant, but I still feel like I need to take a shower...
So to my question: Are there any obvious pitfalls with this solution that I haven't thought about?
Congrats, you have rediscovered memcache. (I did it twice.)
If you need to keep this data, then you actually should save it to a DB, because a server app restart will erase all data from RAM. So it's better to use memcache plus asynchronous writes to the DB.
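A minimal sketch of that combination: an in-memory store for reads, with a write-behind flush of only the changed keys to some persistent `db` (stubbed here; in a real setup `writeBatch` would be an async DB call and `flush` would run on a timer via `start`):

```javascript
// In-memory cache with write-behind persistence. Reads never touch the
// DB; writes land in RAM immediately and are batched out later.
function makeWriteBehindStore(db) {
  const cache = {};
  let dirty = {};
  function flush() {
    const batch = dirty;
    dirty = {};
    // Only write if something changed since the last flush.
    if (Object.keys(batch).length) db.writeBatch(batch);
  }
  return {
    set: function (key, value) {
      cache[key] = value;
      dirty[key] = value; // remember what needs persisting
    },
    get: function (key) {
      return cache[key];
    },
    flush: flush,
    // Flush periodically in the background.
    start: function (ms) { return setInterval(flush, ms); }
  };
}
```

This keeps the low-latency feel of the pure in-memory approach while surviving restarts, at the cost of possibly losing the last unflushed interval of updates, which, for frequently refreshed location data, is usually acceptable.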