Using the Google geolocation object as a session var - javascript

I'm building an app which needs to know where the user is; it displays events within a radius of the user.
I'm using the Google Geocoding API and have been saving the returned object as a session variable, passing the info back and forth via AJAX to retrieve and update the location.
I've noticed that occasionally the array and object keys will be different. For instance, sometimes it will have a nicely formatted results.geometry.location.lat hierarchy, but then occasionally it will be results.geometry.location.d and sometimes even results.geometry.location.A.
I created a getter in JavaScript which returns the lat and lng regardless of the keys returned. I'm surprised that the returned objects don't have built-in getter functions when the keys vary like that.
So, I'm wondering if there's a better way to store the user's location than just saving the entire geolocation response. I tried paring it down to just what I want to use, but that means every time I make a call to Google for a location, I need to process it.
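For illustration, a minimal sketch of the paring-down idea (jQuery and the Maps JavaScript API v3 are assumed); lat() and lng() are the documented accessors, so the minified property names never reach the session:

var geocoder = new google.maps.Geocoder();
geocoder.geocode({ address: address }, function (results, status) {
    if (status !== google.maps.GeocoderStatus.OK) return;
    // lat()/lng() are the documented accessors; the underlying property
    // names (d, A, ...) are minified and change between API releases
    var coords = {
        lat: results[0].geometry.location.lat(),
        lng: results[0].geometry.location.lng()
    };
    $.post('/users/set_location', coords);  // store only this plain object in the session
});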
Has anyone had any experience getting this sort of thing to work?
Thanks
EDIT:
I'm using CakePHP and I realized I should have made this a datasource from the get-go. I currently have all the logic in a component called from my controllers. I'm going to spend some time creating a datasource, which should take care of the getting and setting of this mutating object.

Related

Persistent variable with Gmail scripting (Google Apps Script) [duplicate]

In a Google spreadsheet using the Script Editor, I make function calls, but I am not sure whether the best way to store persistent data (data that I will continue to use) is global variables (objects, arrays, strings), or whether there is a better way to store data.
I don't want to use cells, which would be another way.
Another question: is it possible to create (pseudo-)classes in this environment? What's the best way?
Both ScriptProperties and ScriptDB are deprecated.
Instead, you should be using the new class PropertiesService which is split into three sections of narrowing scope:
Document - Gets a property store that all users can access within the current document, if the script is published as an add-on.
Script - Gets a property store that all users can access, but only within this script.
User - Gets a property store that only the current user can access, and only within this script.
Here's an example persisting a script property across calls:
var properties = PropertiesService.getScriptProperties();

function saveValue(lastDate) {
    properties.setProperty('lastCalled', lastDate);
}

function getValue() {
    return properties.getProperty('lastCalled');
}
The script execution environment is stateless, so you cannot access local variables from previous runs; however, the top-level getScriptProperties() assignment is re-run on each return trip to the server, so the property store is available in either function.
If you need to store something on a more temporary basis, you can use the CacheService API.
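A minimal sketch of the cache alternative (the key name is arbitrary; values expire automatically, here after 10 minutes):

function cacheValue(lastDate) {
    // the third argument is the lifetime in seconds
    CacheService.getScriptCache().put('lastCalled', lastDate, 600);
}
function getCachedValue() {
    return CacheService.getScriptCache().get('lastCalled');
}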
Persistent data can be stored using the Class ScriptProperties:
http://code.google.com/googleapps/appsscript/class_scriptproperties.html
All values are stored as strings and will have to be converted back with the likes of parseInt or parseFloat when they are retrieved.
JSON objects can also be stored in this manner.
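For example (a sketch using the deprecated ScriptProperties class this answer refers to; PropertiesService behaves the same way):

// numbers come back as strings, so convert on read
ScriptProperties.setProperty('count', '42');
var count = parseInt(ScriptProperties.getProperty('count'), 10);
// objects survive a round trip through JSON
ScriptProperties.setProperty('config', JSON.stringify({ retries: 3 }));
var config = JSON.parse(ScriptProperties.getProperty('config'));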
My experience has been that every query to retrieve or store values takes a long time. At the very least, I would cache the information in your javascript code as much as possible when it is safe. My scripts always execute all at once, so I don't need to keep global variables as I simply pass the retrieved data arrays around, manipulate them, and finally store them back in one fell swoop. If I needed persistence across script invocations and I didn't care about dropping intermediate values on close of the webpage, then I'd use globals. Clearly you have to think about what happens if your script is stopped in the middle and you haven't yet stored the values back to Google.

Impact of calling this.$scope.$digest();

I would like to know whether there is any impact of calling this.$scope.$digest(); after every AJAX download. I am using SignalR to get data from the server, and as soon as the data comes in I want it painted to the grid. While my controller functions execute in the blink of an eye, the painting to the UI takes around 3 to 4 seconds, which is unacceptable.
Angular Batarang says 6.8 ms and 1542 watchers.
How do I optimize the page?
There are two possible reasons for your issue: either you are retrieving data very often and trying to redraw every time it is received, or you are retrieving a lot of data and trying to update a very large control in your view. Here is how you solve both of these problems:
Data retrieval faster than update speed
For this you need to create a buffer between the retrieved data and the $scope data. Basically, whenever you receive new data, you push the changes to a data structure that isn't on scope. This way you can get data as fast as you want and it won't affect rendering. Then you need a heuristic for deciding when to redraw: this could be a timer, or some condition after a data change. Once the condition is true, you copy the changes over to your $scope object, which updates the view (a sketch follows the outline below).
- receive data -> write to non scope buffer
- when some condition is met -> write buffer or buffer changes to $scope
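A minimal sketch of the buffering idea, assuming a SignalR hub proxy named dataHub and Angular 1.3+ for $applyAsync (both assumptions):

var buffer = [];  // lives outside $scope, so writes don't trigger watchers

dataHub.client.rowReceived = function (row) {
    buffer.push(row);  // receive as fast as the server sends
};

// flush to $scope on a fixed cadence instead of once per message
setInterval(function () {
    if (buffer.length === 0) return;
    $scope.$applyAsync(function () {
        $scope.rows = $scope.rows.concat(buffer.splice(0, buffer.length));
    });
}, 500);  // redraw at most twice a second

The interval is the heuristic here; a dirty flag checked after each batch of changes works just as well.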
Data retrieved is large, and view is large and complex
For this situation, your only option is to somehow simplify the view. With grids, this can be some sort of pagination or limiting constraint. There are plenty of Angular grids out there that do these kinds of things, and I would just look for one that suits your situation.

How to integrate Redux with very large data-sets and IndexedDB

I have an app that uses a sync API to get its data and needs to store all of that data locally.
The data set itself is very large, and I am reluctant to keep it in memory, since it can contain thousands of records. Since I don't think the actual data structure is relevant, let's assume I am building an email client that needs to be accessible offline, and that I want my storage mechanism to be IndexedDB (which is async).
I know that a simple solution would be to not have the data structure as part of my state object and only populate the state with the required data (eg - store email content on state when EMAIL_OPEN action is triggered). This is quite simple, especially with redux-thunk.
However, this would mean compromising two things:
The user data is no longer part of the "application state", although in truth it is; since the sync behavior is complex, removing it from the app state machine hurts the elegance of the redux concepts (as I understand them).
I really like the redux architecture and would like all of my logic to go through it, not just the view state.
Are there any best-practices on how to use redux with a not-in-memory state properties? The thing I find hardest to wrap my head around is that redux relies on synchronous APIs, and so I cannot replace my state object with an async state object (unless I remove redux completely and replace it with my own, async implementation and connector).
I couldn't find an answer using Google, but if there are already good resources on the subject I would love to be pointed to them.
UPDATE:
The question was answered, but I wanted to give a better explanation of how I implemented it, in case someone runs into this:
The main idea is to maintain change lists for both client and server using plain redux reducers, and to use a connector that listens to these change lists to update IDB and to push client changes to the server:
When client makes changes, use reducers to update client change list.
When server sends updates, use reducers to update server change list.
A connector listens to store, and on state change updates IDB. Also maintain internal list of items that were modified.
When updating the server, use list of modified items to pull delta from IDB and send to server.
When accessing the data, use normal actions to pull from IDB (eg using redux-thunk)
The only caveat with this approach is that, since the real state is stored in IDB, we lose some of the value of having one state object (and rewinding/fast-forwarding state becomes harder).
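A rough sketch of the connector and the thunk-based read (the store shape, action names, and the idb-keyval helper are all assumptions):

import { get, set } from 'idb-keyval';

// connector: subscribe to the store and persist client changes to IDB
function createIdbConnector(store) {
    let lastSeen;
    store.subscribe(() => {
        const { clientChanges } = store.getState();
        if (clientChanges === lastSeen) return;  // nothing new
        lastSeen = clientChanges;
        clientChanges.forEach(change => set(change.id, change.record));
    });
}

// reading on demand via redux-thunk: only the opened email lands on state
const openEmail = id => async dispatch => {
    const email = await get(id);  // pull from IndexedDB
    dispatch({ type: 'EMAIL_OPEN', email });
};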
I think your first hunch is correct. If(!) you can't store everything in the store, you have to store less in the store. But I believe I can make that solution sound much better:
IndexedDB just becomes another endpoint, much like any server API you consume. When you fetch data from the server, you forward it to IndexedDB, from where your store is then populated. The store gets just what it needs and caches it as long as it doesn't get too big or stale.
It's really not different than, say, Facebook consuming their API. There's never all the data for a user in the store. References are implemented with IDs and these are loaded when required.
You can keep all your logic in redux. Just create actions as usual for user actions and data changes, get the data you need and process it. The interface is still completely defined by the user data, because you always have the information in the store that is needed to GET TO the rest of it when needed. It's just somewhat condensed, i.e. you only save the total number of messages or the IDs of a mailbox until the user navigates to it.
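As a sketch of that idea (the endpoint, action names, and idb-keyval are assumptions), the fetch path forwards everything to IndexedDB but puts only lightweight references on state:

import { set } from 'idb-keyval';

const fetchMailbox = mailboxId => async dispatch => {
    const res = await fetch(`/api/mailboxes/${mailboxId}`);
    const messages = await res.json();
    await Promise.all(messages.map(m => set(m.id, m)));  // forward to IndexedDB
    dispatch({
        type: 'MAILBOX_LOADED',
        mailboxId,
        messageIds: messages.map(m => m.id),  // IDs only; bodies stay in IDB
    });
};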

How to correctly access properties of a Sencha model?

There appear to be a number of different ways to access properties of a Sencha (Touch) model. However, I can't find proper documentation of which is the "correct" way of doing it.
Model creation
var model = Ext.create('MyApp.model.MyModel', {
    name: value,
    foo: bar,
    ...
});
Property access
model.get('name') or model.set('name', newValue)
model.data.name or model.data.name = newValue
model.raw.name seems to always return a string no matter what the data type in the model definition is?
Let's sort this all out:
get and set methods are the intended accessors for model field values.
model.data is the object that stores the client-side model values, that is, the values that have been converted from the data received from the server proxy using the fields configuration (type, convert method, etc.).
model.raw is the raw data that was received from the server proxy, before it was converted to client side application domain model values. You should avoid using it, or you will tie yourself to your proxy/server.
model['name']: as you've said, it doesn't work. Don't hope for it to come back (I don't even really understand how it worked at one point).
Now, which one should you use? Clearly, the last two are already out of the running.
The model.data object should give you the expected result in most cases (see below), and should give you a marginal performance gain over calling a function.
However, IMO you should always prefer to use the getters and setters, for two reasons.
First, it might happen that someone in your team (or you from the past) decides that the getter/setter is a good place to add some custom logic. In that case, bypassing the accessor by using the data object directly will also bypass this logic and yield unpredictable results.
Secondly, getters and setters make it much easier to debug certain situations, by making it easy to know where modifications of the model values are coming from. I mean, if one day you were to ask yourself "why is my model field value changing to this??", and all the code uses the getters, you'll just have to put a breakpoint in there, and you'll catch the culprit red-handed. On the other hand, if the offending code uses the data object directly, you'll be stuck doing a whole-project search for... you can't even tell exactly what: data.name =? data['name'] =? name:? etc.
Now that I think about it, there is yet another reason, although an apparently deprecated one: the data object's name used to be customizable via the persistenceProperty config. So, in some cases, the data object wouldn't even be available under that name, and code doing model.data.name instead of model[model.persistenceProperty].name would crash, plain and simple.
Short answer: use the accessors.
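A small sketch of the difference (the model and field names are hypothetical):

var user = Ext.create('MyApp.model.User', { name: 'Ada' });

console.log(user.get('name'));  // 'Ada' -- goes through the accessor
user.set('name', 'Grace');      // runs any custom setter logic and
                                // lets the record track the change

console.log(user.data.name);    // works today, but silently bypasses
                                // whatever get/set were taught to do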

dojo.store.Observable, JSON REST and queryEngine

Does anybody know how to use the JsonRest store in Dojo with an Observable wrapper, like the one in dojo.store.Observable?
What do I need, server side, to implement the store and make it work as an Observable one? What about the client side?
The documentation (http://dojotoolkit.org/reference-guide/1.7/dojo/store/Observable.html) says:
If you are using a server side store like the JsonRest store, you will need to provide a queryEngine in order for the update objects to be properly included or excluded from queries. If a queryEngine is not available, observe listener will be called with an undefined index.
But, I have no idea what they mean. I have never created a store myself, and am not 100% familiar with queryEngine (to be honest, I find it a little confusing). Why is queryEngine needed? What does the doc mean by "undefined index"? And how do you write a queryEngine for a JsonRest store? Shouldn't I use some kind of web socket for an observable REST store, since other users might change the data as well?
Confused!
I realize this question is a bit old, but here's some info for future reference. Since this is a multi-part question, I'll break it down into separate pieces:
1) Server-side Implementation of JsonRest
There's a pretty decent write-up on implementing the server side of the JsonRest store. It shows exactly what headers JsonRest will generate and what content will be included in the REST calls. It helps form a mental model of how the JsonRest API is converted into HTTP.
2) Query Engine
Earlier on the same page, it explains how query() works client-side. Basically, the query() function needs to be able to receive an object literal (ex: {title:'Learning Dojo',categoryid:5}) and return the objects in the store that match those conditions. "In the store" meaning already loaded into memory on the client, not on the server.
Depending on what you're trying to do, there's probably no need to write your own queryEngine anyway -- just use the built-in SimpleQueryEngine if you're building your own custom store. The engine just needs to be handed an object literal and it adds the whole dojo query() api for you.
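A sketch of the wiring (AMD style; the target URL is an assumption):

require([
    'dojo/store/JsonRest',
    'dojo/store/Observable',
    'dojo/store/util/SimpleQueryEngine'
], function (JsonRest, Observable, SimpleQueryEngine) {
    var store = Observable(new JsonRest({
        target: '/api/posts/',
        queryEngine: SimpleQueryEngine  // lets Observable match updates to queries
    }));

    var results = store.query({ categoryid: 5 });
    results.observe(function (object, removedFrom, insertedInto) {
        // the index arguments tell you whether the object entered
        // or left this particular result set
    }, true);  // true = also observe in-place object updates
});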
3) Observables
My understanding is that the Observables monitor client side changes in the collection of objects (ex: adding or removing a result) or even within a specific object (ex: post 5 has changed title). It does NOT monitor changes that happen server-side. It simply provides a mechanism to notify other aspects of the client-side app that data changed so that all aspects of the page stay synchronized.
There's a whole write-up on using Observables under the headings 'Collection Data Binding' and 'Object Data Binding: dojo/Stateful'.
4) Concurrency
There are two things you'd want to do in order to keep your client-side data synchronized with the server-side data: a) polling for changes from other users on the server, and b) using transactions to send data to the server.
a) To poll for changes to the data, you'd want to have your object store track the active query in a variable. Then, use setTimeout() or setInterval() to run the query in the background again every so often. Make sure that widgets or other aspects of your application use Observables to monitor changes in the query result set(s) they depend on. That way, changes on the server by other users would automatically be reflected throughout your application.
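A sketch of the polling loop (the endpoint and Dojo 1.8's dojo/request are assumptions); Observable adds a notify() method to the wrapped store for feeding in exactly these external changes:

require(['dojo/request'], function (request) {
    setInterval(function () {
        request('/api/posts/changes', { handleAs: 'json' }).then(function (changes) {
            changes.forEach(function (item) {
                store.notify(item, item.id);  // runs the queryEngine and fires observers
            });
        });
    }, 30000);  // poll every 30 seconds
});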
b) Use transactions to combine actions that must succeed or fail together. Then make sure the server sends back an HTTP 200 status code (meaning "it worked!"). If the transaction returns an HTTP status in the 400s, it didn't work for some reason, and you need to requery the data because something changed on the backend. For example, the record you want to update was deleted, so you can't update it. There's a write-up on transactions as well, under the heading 'Transactional'.
