I am trying to load my playlist from Spotify onto my website using their API, requesting the data with AJAX. But that data gets reloaded every single time I refresh the page, which hurts speed and UX. Is there any solution to this problem?
P.S. A more complex problem: I was going to request the data once per period of time and have a server-side Python application I wrote modify the HTML files stored on the server whenever the playlist changed (for example, when I added or removed songs via Spotify). The purpose is not only to free the browser from requesting the data but also to get the chance to sort that data by publishing date, rating, or artist for a better client-side layout. So my questions are:
Is it safe to let a server-side application modify the HTML files automatically (meaning really saving the changes to the server-side HTML files)? It is a long-running process, and I don't want to either leave that tedious work to the client side, which bothers the users, or update my website manually, which bothers me.
Are those things even possible?
Try using HTML5 localStorage to store your playlist in the client's browser along with a timestamp. Once the timestamp is a certain period old, any loaded page can reload your playlist if necessary.
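Here's a minimal sketch of that idea; the cache key, max age, and the `/playlist` endpoint are placeholders for whatever your own setup uses:

```javascript
// Minimal sketch of timestamp-based caching in localStorage.
// CACHE_KEY and the /playlist endpoint are placeholders, not a real API.
var CACHE_KEY = 'playlistCache';
var MAX_AGE_MS = 60 * 60 * 1000; // refresh at most once per hour

function loadPlaylist(render) {
  var cached = localStorage.getItem(CACHE_KEY);
  if (cached) {
    var entry = JSON.parse(cached);
    if (Date.now() - entry.savedAt < MAX_AGE_MS) {
      render(entry.data); // fresh enough: skip the network entirely
      return;
    }
  }
  // Stale or missing: fetch from your own backend / the Spotify API
  fetch('/playlist')
    .then(function (res) { return res.json(); })
    .then(function (data) {
      localStorage.setItem(CACHE_KEY,
        JSON.stringify({ savedAt: Date.now(), data: data }));
      render(data);
    });
}
```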
I am fairly new to JavaScript, though I know the basics. I am looking to build my own JavaScript library (from scratch), just like Google's analytics.js, that will track user behavior on websites. Basically I'm looking to collect data like:
Click through data
Dwell time
Page hits, etc.
I spent a lot of time trying to find websites/tutorials to get me started on this, but I keep ending up on Google's analytics.js or some private tools.
What I am looking for:
Is there any good starting point/resource/website which can help me build this js library
Are there references for the architecture of an end-to-end system, including the back-end?
Any open-source library that I can directly use?
Some things I already looked into
Chaoming build your own analytics tool
Splunk BYO analytics
At its most basic, the architecture of such an application would only require a client, a server, and a database.
You can use basic JavaScript functions to record specific user actions on the frontend and then push them to your server. To identify your users, you can set a cookie with a unique ID. Then, every time you send data to your server, you will know which user the request came from, so you can keep track of their actions. (Be careful of privacy laws first, though.)
For page hits, simply send a request to the server every time someone opens your site, so call this function as soon as your JavaScript loads. On the server, increment the appropriate value in your database.
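A rough sketch of what that could look like; the `/track` endpoint and the cookie name are assumptions, not a real API:

```javascript
// Sketch: assign a visitor-id cookie and report a page hit on load.
// The /track endpoint and the 'visitorId' cookie name are placeholders.
function getVisitorId() {
  var match = document.cookie.match(/(?:^|; )visitorId=([^;]+)/);
  if (match) return match[1];
  var id = Math.random().toString(36).slice(2) + Date.now().toString(36);
  document.cookie = 'visitorId=' + id + '; path=/; max-age=31536000';
  return id;
}

// Call as soon as the script loads to count a page hit.
fetch('/track', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    type: 'pageview',
    visitorId: getVisitorId(),
    url: location.pathname
  })
});
```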
For user dwell time, write a function that records the time when the user first hits your site and counts how long they stay. Push your data to the server every so often, adding the new time spent to the current total on the user record. You could also watch for when a user is about to leave the site and send the data all at once, although this method is more fragile.
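Something like this could work (again, `/track` is a placeholder, and `getVisitorId` is from the previous sketch):

```javascript
// Sketch: measure dwell time and flush it when the page is hidden.
// navigator.sendBeacon survives page unload better than a normal request.
var startedAt = Date.now();

document.addEventListener('visibilitychange', function () {
  if (document.visibilityState === 'hidden') {
    var payload = JSON.stringify({
      type: 'dwell',
      visitorId: getVisitorId(), // from the page-hit sketch above
      ms: Date.now() - startedAt
    });
    navigator.sendBeacon('/track', payload); // '/track' is a placeholder
  }
});
```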
For clicks and hovers, set up onclick and mouseover event handlers on your links or whatever elements you want to track. Then push the URL of the link they clicked, or whatever data you want, like "Clicked navbar after 200 seconds on site and after hovering over logo".
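For example (the endpoint and the `data-track-hover` attribute are assumptions):

```javascript
// Sketch: delegate click and mouseover tracking from the document root,
// so dynamically added links are covered too. '/track' is a placeholder.
function report(type, detail) {
  fetch('/track', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ type: type, detail: detail, at: Date.now() })
  });
}

document.addEventListener('click', function (e) {
  var link = e.target.closest('a');
  if (link) report('click', link.href);
});

document.addEventListener('mouseover', function (e) {
  if (e.target.matches('[data-track-hover]')) {
    report('hover', e.target.getAttribute('data-track-hover'));
  }
});
```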
If you want suggestions on specific technologies, then I suggest Node.js for your server-side code and MongoDB for your database. There are many tutorials out there on how to use these technologies together. Look up JavaScript events for a list of the different things you can watch for on the frontend.
These are the building blocks you need. Now you just have to work on defining the data you want and using these technologies to get it.
In the application I am writing, a user captures information about a person via an online form. When they have completed the form they save their work, repeating this process several times in a session. When they hit 'Save and End Session' they are returned a list of the several person instances they have just saved, all data being saved to a server.
I wish to replicate this functionality in an offline app. Using HTML5, I understand how to cache pages and how to store the JSON form data in localStorage using raw JavaScript (or perhaps the Angular.js cache).
But is it possible to dynamically update cached webpages with cached data while offline? How, for example, can I write the cached form data to a cached copy of the list page, updating that page with the data just produced, all during the offline session?
I cannot find an answer to this one. All suggestions are much appreciated!
If I understood this correctly, you want to dynamically update the HTML view while offline.
If you are using Angular, this is pretty simple.
You just have to cache the JS controller as well, not only the HTML file (set it in the cache.manifest). The page will then have the same functionality as the online app. And if you want to send the data stored offline back to the server once the app is online again, you can write some simple code (see the sketch after this list) that will:
Save a parameter in localStorage that marks whether the data was saved while running the online or the offline app (you can recognize online/offline by sending an AJAX request to an existing part of the app that is not available offline, i.e. not cached)
When the app then runs in online mode, collect all the data stored offline and send it to the server
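A rough sketch of that logic, assuming a `/ping` URL that is deliberately left out of the cache.manifest and a hypothetical `/api/records` save endpoint:

```javascript
// Sketch of the offline-queue idea described above. '/ping' must be a
// URL that is NOT in your cache.manifest, so it only answers when online.
function checkOnline(callback) {
  fetch('/ping', { cache: 'no-store' })
    .then(function () { callback(true); })
    .catch(function () { callback(false); });
}

function saveRecord(record) {
  checkOnline(function (online) {
    if (online) {
      sendToServer(record);
    } else {
      // Queue the record in localStorage, marked as saved while offline
      var queue = JSON.parse(localStorage.getItem('offlineQueue') || '[]');
      queue.push(record);
      localStorage.setItem('offlineQueue', JSON.stringify(queue));
    }
  });
}

// On startup (or when connectivity returns), drain the offline queue.
function syncOfflineData() {
  var queue = JSON.parse(localStorage.getItem('offlineQueue') || '[]');
  if (queue.length === 0) return;
  checkOnline(function (online) {
    if (!online) return;
    queue.forEach(function (record) { sendToServer(record); });
    localStorage.removeItem('offlineQueue');
  });
}

function sendToServer(record) {
  // Replace with your real save endpoint; '/api/records' is hypothetical.
  fetch('/api/records', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(record)
  });
}
```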
I'm trying to build a single-page web app using Backbone. The app looks and behaves like a mobile app running on a tablet.
The web app is built to help event organizers manage their lists of people attending their events, and this includes the ability to search and filter those lists of attendees.
I load the whole attendees list when the user opens the attendees screen, and whenever the user starts to search or filter the attendees, the operation happens on the client side.
This approach works perfectly when the event has about ~400 attendees or fewer, but when the number of attendees gets bigger than that (~1000), the initial download takes longer (which makes sense), though after all the data is loaded, searching and filtering are still relatively fast.
I originally decided to go with the option of fully loading all the data each time the app is loaded, doing all search operations on the client side to save my servers the headache and make search results show up faster for the user.
I don't know if this is the best way to build a web/mobile app that processes a lot of data or not.
I wish there were a known pattern for dealing with these kinds of apps.
In my opinion your approach to process the data on the client side makes sense.
But what do you mean by "fully loading all the data each time the app is loaded"?
You could load the data only once at the beginning and then work with this data throughout the app lifecycle without reloading this data every time.
What you also could do is store the data you initially fetched in HTML5 localStorage. Then you only have to refetch the data from the server if something changed. This should reduce your startup time.
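For example, something like this sketch, which assumes the server exposes a lightweight version check alongside the full list (both `/attendees` endpoints are hypothetical):

```javascript
// Sketch: cache the attendee list in localStorage and only refetch when
// the server reports a newer version. '/attendees/version' is assumed.
function loadAttendees(render) {
  var cached = JSON.parse(localStorage.getItem('attendees') || 'null');
  fetch('/attendees/version')
    .then(function (res) { return res.json(); })
    .then(function (info) {
      if (cached && cached.version === info.version) {
        render(cached.list); // nothing changed: skip the big download
        return;
      }
      return fetch('/attendees')
        .then(function (res) { return res.json(); })
        .then(function (list) {
          localStorage.setItem('attendees',
            JSON.stringify({ version: info.version, list: list }));
          render(list);
        });
    });
}
```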
I'm developing a kind of image/profile search application that is based almost exclusively on AJAX. The main page basically displays profile images and allows the user to filter/search and paginate through them.
Pagination works when user scrolls, so the interface has to be very very fast. There will be only 6 (maybe 9, but definitely not more) images displayed on the main page, so users will scroll a lot. I'm currently using very simple JS cache to store results of all the requests in case user decides to go back ... in that case, I simply pull everything out of the cache instead of querying the server.
Client cache
One option I thought of is to pre-load, say, 10 pages ahead and store them in the cache.
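Something like this rough sketch (the `/search` endpoint and `renderResults` are just stand-ins for my actual code):

```javascript
// Sketch: keep fetched pages in an in-memory cache keyed by page number
// and filter string, and prefetch a few pages ahead of the current one.
var pageCache = {};

function fetchPage(page, filters) {
  var key = page + '|' + JSON.stringify(filters);
  if (pageCache[key]) {
    return Promise.resolve(pageCache[key]); // cache hit: no request
  }
  // '/search' and its parameters are placeholders for the real endpoint.
  return fetch('/search?page=' + page + '&' + new URLSearchParams(filters))
    .then(function (res) { return res.json(); })
    .then(function (data) {
      pageCache[key] = data;
      return data;
    });
}

function showPage(page, filters) {
  fetchPage(page, filters).then(renderResults); // renderResults: UI code
  // Warm the cache for the next few pages in the background.
  for (var i = 1; i <= 10; i++) fetchPage(page + i, filters);
}
```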
But my biggest issue is filtering/searching, since that completely changes the type of query that goes to the server. My filters aren't very complex, only around 6-7 string/number/enum attributes.
Now if I wanted to do all the filtering in the cache, I would have to duplicate all the search logic and fetch all the data from the server (not just the data I'm displaying), so I could filter the results on client side.
This raises a question: should I make the cache somehow persistent? Store it in a cookie, maybe?
Server cache?
One suggestion might be to use memcached on the server and just store everything there. I'm definitely going to cache away all the results I can, but that doesn't save the server from handling loads and loads of AJAX requests.
I'm developing this application on Rails 3, and even though I love it, I wouldn't say it's the fastest thing in the world. Another option that this gives me is to create separate Rack/Sinatra application to handle only the AJAX requests. By this I mean requests from the main query, not all AJAX.
S3 for images?
A big part of this application is images, even though they're mostly small thumbnails (unless the user wants to view them bigger).
At the moment, I don't have problems with bandwidth. My VPS host provides me with 200GB, which should be more than enough (I hope). The problem is loading speed. Would it help if I uploaded all the images to S3 and load them from there, or is this worth doing only for larger files? I'm going to load a lot of 100x150px images, which are generally under 50kB.
Have you looked at SlickGrid? It has an interesting approach of only building the list as users scroll down and then removing rows as users scroll out of that range.
Sorry for the somewhat confusing title; I'm not sure how to title this. My situation is this: I have an academic simulation tool that I am in the process of developing a web front-end for. While the C++-based simulator is computationally quite efficient (several hundredths to a tenth of a second runtime) for small systems, it can generate a significant (in web app terms) amount of data (~4-6 MB).
Currently the setup is as follows:
User accesses the index.html file. On the left side, this page has an interactive form where the user can input simulation parameters. On the right side is a representation of the system they are creating, along with some greyed-out tabs for various plots of the simulation data.
User clicks "Run simulation." This submits the requested sim parameters to a runSimulation.php file via an AJAX call. runSimulation.php creates an input file based on the submitted data, then runs the simulator using this input file. The simulator spits out 4-6mb of data in various output files.
Once the simulation is done running, the response to the browser is another JavaScript function which calls a file returnData.php. This PHP script packages the data in the output files as JSON, returns the JSON data to the browser, then deletes the data files.
This response data is then fed to a few plotting objects in the browser's javascript, and the plot tabs become active. The user can then open and interact with the plotted data.
This setup is working OK, however I am running into two issues:
The return data is slow: 4-6 MB of data coming back can take a while to load. (That data is being gzipped, which reduces its size considerably, but it can still take 20+ seconds on a slower connection.)
The next goal is to allow the user to plot multiple simulation runs so that they can compare the results.
My thought is that I might want to keep the data files on the server while the user's session is active. This would enable loading only the data for the plot the user wants to view (and perhaps loading other data in the background as they view the results of the current plot). For multiple runs, I can have multiple data sets sitting on the server, ready for the user to download if/when they are needed.
However, I have a big issue with this line of thinking: how do I recognize (in PHP) that the user has left the site, so I can delete the data? I don't want users to take over the drive space on the machine. Any thoughts on best practices for this kind of web app?
For problem #1, you don't really have many options. You are already gzipping the data and using JSON, which is a relatively lightweight format. 4-6 MB of data is indeed a lot. BTW, if you think PHP is taking too long to generate the data, you can have your C++ program generate it and serve it through PHP; you can use exec() for that.
I am not sure how your simulations work, but JavaScript is a Turing-complete language, so you could possibly generate some/most/all of this data on the client side (whatever makes more sense). In that case, you would save lots of bandwidth and decrease loading times significantly, though mind that JS can be really slow.
For problem #2, if you leave data on the server you'll need to keep track of active sessions (i.e., when the user last interacted with the server) and set a timeout that makes sense for your application. After the timeout, you can delete the data.
To keep track of interaction, you can use JS to check whether a user is active (by sending heartbeats or something like that).
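A minimal heartbeat sketch (the `/heartbeat.php` endpoint is just an example name; on the server you would record the timestamp per session and let a cron job delete data files whose session has been silent longer than your timeout):

```javascript
// Sketch: ping the server once a minute so it knows the session is alive.
// The session cookie sent with the request identifies the user.
setInterval(function () {
  fetch('/heartbeat.php', { method: 'POST' }); // endpoint is a placeholder
}, 60 * 1000);
```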