Does Meteor retain reactivity when using a REST API - javascript

I am planning to use the Qualtrics REST API in order to get the data collected from a survey. Can I still retain Meteor's reactivity directly through the REST API, or should I save the data from the REST API into MongoDB to enable real-time updates within the app?
Any advice and further reading will be great.
This probably sounds like a noob question, but I am just starting off with Meteor and JS as server-side code and have never used a web API before.

It entirely depends on what you do with the data it returns. Assuming you're either polling periodically or the API has some kind of push service (I've never heard of it before, so I have no idea), you would need to store the data it returns in a reactive data source: probably a Collection or Session variable, depending on how much persistence is required. Any Meteor templates that access these structures have reactivity built in, as documented here.
You will need to poll the API at an appropriately regular interval for this setup to work, though. Take a look at Meteor.setInterval, or the meteor-cron package, which is probably preferable.
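For illustration, here is a minimal server-side sketch of that polling approach. Everything Qualtrics-specific (the URL, token header, and response shape) is a placeholder rather than their real API, and it assumes you've run `meteor add http`:

```javascript
// Server: poll the REST API on an interval and upsert into a collection.
// Any template subscribed to this collection rerenders automatically.
Responses = new Mongo.Collection('responses');

Meteor.setInterval(function () {
  // HTTP.get is synchronous on the Meteor server (fibers).
  var result = HTTP.get(
    'https://example.qualtrics.com/API/v3/surveys/SURVEY_ID/responses', // hypothetical endpoint
    { headers: { 'X-API-TOKEN': 'YOUR_TOKEN' } } // hypothetical auth header
  );
  (result.data.responses || []).forEach(function (r) {
    // upsert keeps the collection in sync without duplicating documents
    Responses.upsert({ responseId: r.responseId }, { $set: r });
  });
}, 60 * 1000); // poll once a minute
```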

Related

A REST API to echo same JSON data back for testing purposes

I am currently developing a front-end app with React. During development, I create some objects and use them to render test data. The app is intended to work with a Spring Boot application on the server side. I have performed some tests to ensure communication between front end and back end before; however, to simplify my development process, I thought about using a RESTful API (ideally available online as a free testing service) to which I would send JSON objects and receive the same object back.
I realize that this sounds counter-intuitive, but here is my reasoning:
I already create my own data, and creating a temporary API just to test would be a waste of time.
I don't mind still having to pollute my front end with the data I'm normally expected to receive from the back end, because I'll be more aware of the network interaction of my components while implementing them.
So the point is not exactly the data we fetch, but the way we fetch it. Currently I won't be working with our own back-end application since it is just too bloated/incomplete to work with. Using publicly available test APIs with their predetermined data types seems infeasible, because I happen to work with a specific data type that has a lot of custom and necessary fields.
I did some quick searching but couldn't find an API like that. I could create a quick REST API locally, but that would be far from ideal in my case, given that in a realistic scenario I'll have the latency and slightly different asynchronous nature of real network interaction, not to mention CORS-related configuration etc.
To be short, my question is as follows:
Is it a known practice to use APIs like this that receive POST requests and respond with the same objects back (although it sounds weird)? Is there any service that you could recommend for me to use?
Thanks in advance.
There's another echo API, by zuplo.com, that's a little easier to use at https://echo.zuplo.io: you don't need to add the method to the URL; it just echoes everything, including the method, back at you.
There are Postman echo APIs that do exactly the same. The API echoes back what you sent it, including each of the data items you included in the request, as part of the response: postman-echo.com/get, postman-echo.com/post, etc.
Please check this link for more details: learning.postman.com/docs/developer/echo-api
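For example, a quick round-trip against the Postman echo endpoint looks like this (the service wraps your request under keys such as json, data, and headers in its response):

```javascript
// POST a JSON object to the echo service and read it back from the response.
fetch('https://postman-echo.com/post', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ id: 42, name: 'test' })
})
  .then(function (res) { return res.json(); })
  .then(function (echoed) {
    console.log(echoed.json); // the object you sent: { id: 42, name: 'test' }
  });
```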
And about this part of the question: "Is it a known practice to use APIs that receive POST requests and respond with the same objects back?"
Yes, we do this to test code using these echo APIs. Which approach you take comes down to the requirements of the problem at hand.

Understanding Apollo client caching and optimistic UI in AWS AppSync JavaScript SDK

I am trying to implement caching in the Apollo client with the AWS AppSync JavaScript SDK, but I am struggling to understand, first, the best way to use the cache and, second, what changes (if any) I need to make to adapt the Apollo V2 tutorials to work with the AppSync SDK.
With regards to using the cache, I have a list of objects that I get, I then want to view and modify a single object from this list. There are lots of tutorials on how to update something in a list, but I would rather run a second query that gets a single object by its ID so that the page will always work without having to go through the list first.
Is the cache smart enough to know that an object X, fetched through queries Y and Z, is the same object, and will it be updated in both places at the same time? If not, is there any documentation on how to write an update that will update the object in the list and on its own at the same time?
If no documentation exists then I will try and work it out on my own and post the code (because it will most likely not work).
With regards to the second question, I have got the application working and querying the API, using Amplify for authentication, but I am unsure how to correctly implement the cache. Do I need to specify the cache when creating the client, or does the SDK have a built-in cache? How do I access the cache? Is it just by querying the client as in these tutorials? https://www.apollographql.com/docs/react/advanced/caching.html
I am going to answer your second question first:
With regards to the second question, I have got the application working and querying the API, using Amplify for authentication, but I am unsure how to correctly implement the cache. Do I need to specify the cache when creating the client, or does the SDK have a built-in cache? How do I access the cache? Is it just by querying the client as in these tutorials?
OK, so here is where it gets a little hairy: it looks like AppSync was deployed at a time when the major client libraries for GraphQL (Apollo, Relay, etc.) were getting an overhaul, so AWS actually created a wrapper around the Apollo Client (probably for API-stability purposes) and then exposed their own way of doing things. A quick run through the code suggests they have their own proprietary and undocumented way of doing things that involves websockets, their authentication protocols, a Redux store, offline functionality, SSR, etc. Thus, if it is not explicitly explained here or here, you're in uncharted territory.
Fortunately, all of the stuff that they provided (and much, much more) has now been implemented in the underlying Apollo Client in a documented way. Even more fortunately, it looks like the AppSync client forwards most of the actual GraphQL related stuff directly to the internal Apollo Cache and allows you to pass in config options under cacheOptions, so most of the configuration you can do with the Apollo Client you can do with the AppSync Client (more below).
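As a sketch of what that looks like in practice (the endpoint, region, and auth values are placeholders, and the custom ID field is hypothetical; cacheOptions is forwarded to the internal Apollo cache as described above):

```javascript
import AWSAppSyncClient, { AUTH_TYPE } from 'aws-appsync';

const client = new AWSAppSyncClient({
  url: 'https://example.appsync-api.us-east-1.amazonaws.com/graphql', // placeholder
  region: 'us-east-1',
  auth: { type: AUTH_TYPE.API_KEY, apiKey: 'da2-xxxxxxxx' }, // placeholder
  cacheOptions: {
    // Forwarded to the internal Apollo InMemoryCache; here, normalize on a
    // custom identifier instead of the default `id` (hypothetical field).
    dataIdFromObject: obj => obj.__typename + ':' + obj.myId
  }
});
```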
Unfortunately, you cannot access the cache directly with the AppSync client (they've hidden it to make sure their public API remains stable in the fluctuating ecosystem). However, if you really need more control, most of the stuff they have implemented in the AppSync client could easily be replicated in your own instantiation of an Apollo Client wherein you'd unlock full control (you can use the open-source AppSync code as a foundation). Since GraphQL frontends and backends are decoupled, there is no reason why you couldn't use your own Apollo Client to connect with the AppSync server (for a large, serious project, this is what I would do as the Apollo Client is much better documented and under active development).
Is the cache smart enough to know that an object X, fetched through queries Y and Z, is the same object, and will it be updated in both places at the same time? If not, is there any documentation on how to write an update that will update the object in the list and on its own at the same time?
This first part pertains to both the Apollo Client and the AppSync client.
Yes! That's one of the great things about the Apollo client: every time you make a query, it tries to update the cache. The cache is a normalized key-value store where all of the objects are stored at the top level, with the key being a combination of the __typename and id properties of the object. The Apollo client will automatically add __typename to all of your queries (though you will have to add id to your queries manually; otherwise it falls back to just the query path itself as the key, which is not very robust).
The docs provide a very good overview of the mechanism.
Now, you may need to do some more advanced stuff. For example, if your GraphQL schema uses some unique object identifier other than id, you'll have to provide a dataIdFromObject function that maps to it.
Additionally, sometimes when making queries it is difficult for the cache to know exactly what you are asking for, so it can't check the cache before making a network request. To alleviate this problem, Apollo provides the cache redirect mechanism.
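Here is a sketch of both customizations with apollo-cache-inmemory (Apollo Client 2.x, matching the tutorials you linked), assuming a hypothetical schema where objects are keyed by uuid and fetched singly via a getObject(uuid) query:

```javascript
import { InMemoryCache } from 'apollo-cache-inmemory';

const cache = new InMemoryCache({
  // Normalize on __typename + uuid instead of the default __typename + id.
  dataIdFromObject: object => object.__typename + ':' + object.uuid,

  // Cache redirect: resolve a single-object query from an object that a
  // list query has already normalized into the cache, skipping the network.
  cacheRedirects: {
    Query: {
      getObject: (_, args, { getCacheKey }) =>
        getCacheKey({ __typename: 'MyObject', uuid: args.uuid })
    }
  }
});
```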
Finally, and perhaps most complicated, is how to work with updating the order of items in paginated queries (e.g., anything that is in an ordered list). To do this, you'll have to use the @connection directive. Since this is based on the Relay connection spec, I'd recommend giving that a skim.
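For example (a hypothetical feed query; the @connection key gives the list a stable cache location regardless of the pagination arguments):

```javascript
import gql from 'graphql-tag';

const FEED_QUERY = gql`
  query Feed($offset: Int, $limit: Int) {
    feed(offset: $offset, limit: $limit) @connection(key: "feed") {
      id
      __typename
      title
    }
  }
`;
```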
Bonus: To see the cache in action, I'd recommend the Apollo client dev tools. It's a little buggy, but it will at least give you some insight into what is actually happening to the cache locally. Note that this will not work if you're using AppSync.
So, besides the above information, which is all about setting up and configuring the cache, you can also control the data in the cache and access it during the runtime of your app (if using the Apollo Client directly and not the AppSync client).
The Direct Cache Access docs specify the available methods. Since most of the updates happen automatically based on the queries you make, you shouldn't have to use these often. One use for them, however, is complicated UI updates. For example, if you make a mutation that deletes an item from a list, instead of re-querying for the entire list (which would update the cache, though at the expense of more network data, parsing, and normalization) you could define a custom cache update using readQuery/writeQuery and the update mutation option. This also plays nicely with optimisticResponse, which you should use if you're looking for optimistic UI.
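A sketch of that delete-from-a-list case (DELETE_ITEM, GET_ITEMS, and the field names are hypothetical placeholders for your own operations):

```javascript
client.mutate({
  mutation: DELETE_ITEM,
  variables: { id: itemId },
  // Optimistic UI: assume success and update the cache immediately.
  optimisticResponse: {
    deleteItem: { __typename: 'Item', id: itemId }
  },
  update: (proxy, { data: { deleteItem } }) => {
    // Remove the deleted item from the cached list without a refetch.
    const data = proxy.readQuery({ query: GET_ITEMS });
    proxy.writeQuery({
      query: GET_ITEMS,
      data: { items: data.items.filter(item => item.id !== deleteItem.id) }
    });
  }
});
```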
Additionally, you can choose whether you want to use or bypass the cache (or apply some more advanced strategy) using the fetchPolicy and errorPolicy options.
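For instance, to bypass the cache for a single freshness-critical read (reusing the hypothetical GET_ITEMS query from above):

```javascript
// Always hits the network, then writes the result into the cache.
client.query({ query: GET_ITEMS, fetchPolicy: 'network-only' });
```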

Creating a Node.js dashboard based on a MySQL DB without a poller

I've read a few StackOverflow posts related to this subject, but I can't find anything that specifically helps me in my scenario.
We have multiple monitoring instances within our network, monitoring different environments (Nagios, Icinga, more...). Currently I have a poller script written in PHP which runs every minute via cron; it asks each instance to return all of its problems in JSON, and the script then interprets this and pushes it into a MySQL database.
There is then an 'overview' page which simply reads the database and does some formatting. There's a bit of AJAX involved: every X seconds (currently 30) it checks for changes (a PHP script call), and if there are changes it requests them via AJAX and updates the page.
There are a few other little bits too (click a problem, and another AJAX request goes off to fetch the problem details to display in a modal, etc.).
I've always been a PHP/MySQL dev, so the above methodology seemed logical to me and was quick/easy to write, and it works 'ok'. However, the problems are: the database is constantly being polled by many users, and there's a mesh of JavaScript on the front end doing half the logic, with PHP on the back end doing the other half.
Would this use case benefit from switching to Node.js? I've done a bit of Node.js before, but nothing like this. Can I subscribe to MySQL updates? Or trigger them when a 'data fetcher' pushes data into the database? I've always been a bit confused, as I use PHP to create data and JavaScript to 'draw' the page. Is there still a split, with Node.js doing the logic and front-end JavaScript creating all the elements, or does Node.js do all of this now? Sorry for the lack of knowledge in this area...
This is definitely an area where Node could offer improvements.
The short version: with websockets in the front-end and regular sockets or an API on the back-end you can eliminate the polling for new data across the board.
The long version:
Front-end:
You can remove all need for polling scripts by implementing websockets. That way, as soon as new data arrives on the server, you can broadcast it to all connected clients. I would advise Socket.io or the Primus websocket wrapper. Both are very easy to implement and incredibly powerful for what you want to achieve.
All data processing logic should happen on the server. The data is then sent to the client and rendered on the existing page; that is basically the only logic the client should contain. There are some frameworks that do all of this for you (e.g. Sails), but I don't have experience with any of them, since they require you to write your entire app according to their rules, which I personally don't like (but I know a lot of developers do).
If you want to render the data in the client without a huge framework, I highly recommend the lightweight but incredibly useful Transparency rendering library. Using this, you can format a Javascript object on the server using Node, JSONify it, send it to the client, and then all the client would have to do is de-JSONify it and call Transparency's .render.
Back-end:
This one depends on how much control you have over the behaviour of the instances you need to check. I assume you have some control, since you can get all their data in a nice JSON format. So, there are multiple options.
You can keep polling every so often. This is the easiest solution since it requires no change to the external services. The Javascript setInterval function is very useful here. Depending on how you connect with the instances, you might be able to use a module like Request to do the actual request, so that takes out a bunch more of the heavy lifting.
The benefit of implementing the polling in your Node app as well is that you will receive the data in your Node app, and that way you can immediately broadcast it to the clients, even before inserting it into a database. This will greatly reduce the number of queries on your database.
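A rough sketch of that combination, with the instance URLs and response shape as placeholders for your own monitoring endpoints:

```javascript
// Poll each monitoring instance and broadcast results over websockets.
const http = require('http');
const request = require('request'); // does the HTTP heavy lifting
const server = http.createServer();
const io = require('socket.io')(server);

const INSTANCES = [
  'http://nagios.local/api/problems',  // placeholder URLs
  'http://icinga.local/api/problems'
];

setInterval(() => {
  INSTANCES.forEach(url => {
    request({ url: url, json: true }, (err, res, problems) => {
      if (err) return console.error(err);
      io.emit('problems', { source: url, problems: problems }); // push to all clients
      // ...optionally also insert into MySQL here
    });
  });
}, 60 * 1000); // once a minute, like the existing cron job

server.listen(3000);
```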
An alternative to polling would be to set up a simple Express-based API where the applications can post their 'problems', as you call them. This way your application will be notified the moment a problem occurs, and combined with the websockets connection to the client, this results in practically real-time updates.
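A minimal sketch of such an endpoint, reusing the io instance from the previous sketch (the route and payload shape are hypothetical; express.json() requires Express 4.16+):

```javascript
const express = require('express');
const app = express();
app.use(express.json()); // parse JSON request bodies

// Monitoring instances POST their problems here as they occur.
app.post('/problems', (req, res) => {
  io.emit('problems', req.body); // forward straight to connected clients
  // ...optionally persist to MySQL here
  res.sendStatus(202);
});

app.listen(3001);
```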
For redundancy, you could keep a polling timer alongside the API, so that you can still check the instances in case something goes wrong that causes them to stop sending data.
An alternative to the more high-level API would be to just use direct socket communication, which is basically the same approach only using a different set of functions.
Lastly, you could also keep the PHP-based polling script. This would be the solution requiring the least change, since you wouldn't have to go and replace everything. Then, from the Node app that's connected to the clients with websockets, you could set an interval to query the database every so often and broadcast the updates. This will still greatly reduce the number of queries, since no matter how many clients are connected, there will only be one query, the response of which then gets sent to all connected clients.
I hope my post has given you some ideas of how you could implement your application using Node. Keep in mind, though, that I am just one developer; this is how I would approach building your application in Node. There will definitely be others who have different opinions.

Dirty update for web based mobile app

I have the following tech stack:
a SQL Server/MVC4/WebAPI backend and an HTML5/jQuery Mobile frontend. Data is transferred via JSON.
I would like to know how I can reduce the data transferred via JSON, i.e. I don't want to get data I already have from the server.
Are there any libraries or design patterns to use or research that could help me with this? What architecture is commonly used to solve this?
To minimise transferred data, you can use local caching: HTML5 localStorage and sessionStorage.
http://www.w3schools.com/html/html5_webstorage.asp
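As a sketch, here is a tiny timestamped localStorage cache on top of the jQuery stack you already have (the endpoint and key names are made up):

```javascript
// Serve from localStorage if fresh enough; otherwise fetch and cache.
function getCached(key, maxAgeMs, fetchFresh) {
  var cached = JSON.parse(localStorage.getItem(key) || 'null');
  if (cached && Date.now() - cached.savedAt < maxAgeMs) {
    return $.Deferred().resolve(cached.data).promise(); // cache hit, no request
  }
  return fetchFresh().done(function (data) {
    localStorage.setItem(key, JSON.stringify({ savedAt: Date.now(), data: data }));
  });
}

// Usage (hypothetical endpoint): refetch at most every five minutes.
getCached('customers', 5 * 60 * 1000, function () {
  return $.getJSON('/api/customers');
}).done(function (customers) {
  // render customers...
});
```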
If you will be handling real-time data, I like using SignalR to enable push updates to active subscribers, which avoids the need for polling.
http://www.asp.net/signalr/
Remember to enable your webserver to cache data where appropriate.
Finally, good old-fashioned analysis of your objects to make sure you are not sending unnecessary data. JSON.NET will give you very good control over what items appear in your serialized output.

Using the global object as storage

I have made an application that receives location data from a few web clients at regular intervals. I made a quick implementation using CouchDB, but as CouchDB creates a new revision for each update and the data is updated quite frequently, it consumed a lot of disk space, whereas the historic data was of little significance. I looked into MongoDB instead, but as I was thinking about how to structure the MongoDB implementation, I had another idea:
The global object is in the process scope, so it can be used to share data between sessions. Persistence beyond the session is not required, so I dropped the database completely and stored all the data in the global object (and persisted some data for user convenience in HTML5 localStorage using JavaScript). The complexity of the backend was greatly reduced, and the solution felt somewhat elegant, but I still feel like I need to take a shower...
So to my question: Are there any obvious pitfalls with this solution that I haven't thought about?
Congrats, you have rediscovered memcache (I did it twice).
If you need to keep this data, then you actually should save it to a DB, because a server app restart will erase all data from RAM. So it's better to use memcache (or your in-process store) together with asynchronous writes to the DB.
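A minimal Node sketch of that pattern, where db.saveLocations is a hypothetical persistence call standing in for whatever database driver you choose:

```javascript
// In-process store, like the global-object approach in the question.
var locations = {};

function updateLocation(clientId, coords) {
  locations[clientId] = { coords: coords, updatedAt: Date.now() };
}

// Flush to the database asynchronously every 30s, so a restart loses at
// most one interval's worth of data. db.saveLocations is hypothetical.
setInterval(function () {
  db.saveLocations(locations, function (err) {
    if (err) console.error('persist failed:', err);
  });
}, 30 * 1000);
```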
