I have a radio station on TuneIn.com. In order to update album art and artist information, I need to send the following request:
# Update the song now playing on a station
GET http://air.radiotime.com/Playing.ashx?partnerId=<id>&partnerKey=<key>&id=<stationid>&title=Bad+Romance&artist=Lady+Gaga
The only way I can think to do this would be by setting up a PHP/JS page that updates the &title and &artist parts of the URL and sends the request off if there is a change. But I'd have to execute it every second, or at least every few seconds, using cron.
Are there any other more efficient ways this could be done?
Thank you for your help.
None of the code in this answer was tested. Use at your own risk.
Since you do not control the third-party API and the API is not capable of pushing information to you when it's available (an ideal situation), your only option is to poll the API at some interval to look for changes and to make updates as necessary. (Be sure the API provider is okay with such an approach as it might violate terms of use designed to prevent system abuse.)
You need some sort of long-running process that will execute at a given interval.
You mentioned cron calling a PHP script which is one option (here cron is the long-running process). Cron is very stable and would be a good choice. I believe though that cron has a minimum interval of 1 minute. I'm sure there are similar tools out there, but those might require you to have full control over your server.
You could also make a PHP script the long-running process with something like this:
while (true) {
    doUpdates(); # Call the API, make updates, etc.
    sleep(5);    # Wait 5 seconds
}
If you do go down the PHP route, error handling of some sort will be a must:
while (true) {
    try {
        doUpdates();
    } catch (Exception $e) {
        # Manage the error
    }
    sleep(5);
}
Personal Advice
Using PHP as a daemon is possible, but it is not as well tested as the typical use of PHP. If this task were given to me, I'd write a server/application in JavaScript using Node.js. I would prefer Node because it is designed to work as a long-running process, intervals and events are a core part of JavaScript, and I would be more confident in it working well than PHP for this specific task.
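To make that concrete, here is a minimal, untested sketch of the Node.js approach. It assumes Node 18+ (for the built-in fetch) and a hypothetical getNowPlaying() that reads the current track from your playout system; the partnerId/partnerKey/station placeholders are the same ones from your URL:

const POLL_INTERVAL_MS = 5000;
let lastSong = null;

async function doUpdate() {
  const song = await getNowPlaying(); // hypothetical: read the current track from your playout system
  if (lastSong && song.title === lastSong.title && song.artist === lastSong.artist) {
    return; // nothing changed, skip the API call
  }
  lastSong = song;
  const url = 'http://air.radiotime.com/Playing.ashx'
    + '?partnerId=<id>&partnerKey=<key>&id=<stationid>'
    + '&title=' + encodeURIComponent(song.title)
    + '&artist=' + encodeURIComponent(song.artist);
  await fetch(url); // notify TuneIn of the change
}

setInterval(() => {
  doUpdate().catch((err) => console.error('update failed:', err));
}, POLL_INTERVAL_MS);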
Related
I am using jimp (https://www.npmjs.com/package/jimp) in Meteor to generate an image server-side. In other words, I am 'calculating' the pixels of the image using a recursive algorithm. The algorithm takes quite some time to complete.
The issue I am having is that this seems to completely block the meteor server. Users trying to visit the webpage while an image is being generated are forced to wait. The website is therefore not rendered at all.
Is there any (meteor) way to run the heavy recursive algorithm in a thread or something so that it does not block the entire website?
Node (and consequently Meteor) runs JavaScript in a single thread, so CPU-intensive work blocks everything else. In short, Node works really well when you are I/O-bound, but as soon as you do anything that's compute-bound you need another approach.
As was suggested in the comments above, you'll need to offload this CPU-intensive activity to another process which could live on the same server (if you have multiple cores) or a different server.
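For the image-generation case in the question, a rough (untested) sketch of offloading to a separate process on the same machine, using Node's built-in child_process module, might look like this. Here generate-image.js is a hypothetical worker script that runs the recursive jimp algorithm and reports back when done:

var fork = require('child_process').fork;

function generateImageAsync(params, callback) {
  var worker = fork('generate-image.js');   // separate process with its own event loop
  worker.send(params);                      // hand over the generation parameters
  worker.on('message', function (result) {  // worker sends back e.g. the output file path
    callback(null, result);
    worker.kill();
  });
  worker.on('error', callback);
}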
We have a similar problem at Edthena, where we need to transcode a subset of our video files. For now I decided to use a Meteor-based solution, because it was easy to set up. Here's what we did:
When new transcode jobs need to happen, we insert a "video job" document into the database.
On a separate server (transcoding maxes out the CPU), we have an app which calls observe like this:
Meteor.startup(function () {
  // Listen for non-failed transcode jobs in creation order. Use a limit of 1 to
  // prevent multiple jobs of this type from running concurrently.
  var selector = {
    type: 'transcode',
    state: { $ne: 'failed' },
  };
  var options = {
    sort: { createdAt: 1 },
    limit: 1,
  };

  VideoJobs.find(selector, options).observe({
    added: function (videoJob) {
      transcode(videoJob);
    },
  });
});
As the comments indicate, this allows only one job to run at a time, which may or may not be what you want. It has the further limitation that you can only run it on one app instance (multiple instances calling observe would each pick up and process the same job). So it's a pretty simplistic job queue, but it may work for your purposes for a while.
As you scale, you could use a more robust mechanism for dequeuing and processing the tasks, like Amazon's SQS service. You can also explore other Meteor-based solutions like job-collection.
I believe you're looking for Meteor.defer(yourFunction).
Relevant Kadira article: https://kadira.io/academy/meteor-performance-101/content/make-your-app-faster
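For completeness, a sketch of what that suggestion could look like inside a method. Note that Meteor.defer only postpones the work until after the current call returns; it still runs on the same single thread, so by itself it won't stop CPU-bound code from blocking the server:

Meteor.methods({
  generateImage: function (params) {
    // Return to the client immediately and run the heavy routine afterwards.
    Meteor.defer(function () {
      generateImage(params); // hypothetical recursive jimp routine
    });
    return 'queued';
  }
});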
Thanks for the comments and answers! It seems to be working now. What I did is what David suggested: I am running a second Meteor app on the same server, and this app deals with generating the images. However, that app was still eating away all the processing power.
As a result, I set a slightly lower priority on the generating process with the renice command on its PID (https://www.nixtutor.com/linux/changing-priority-on-linux-processes/). This works! Any time a user visits the website, the other (client-facing) Meteor application gains priority over the generating algorithm. There is absolutely no delay anymore.
The only issue I have now is that whenever the server restarts I have to run the renice command again.
Since I am using Meteor Up for deployment, both apps run as the same user with the same command: node main.js. I am currently trying to figure out how to run the nice command within the startup script of Meteor Up (located at /etc/init/.conf).
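One possible alternative to re-running renice by hand (a sketch, assuming Node 10.10 or newer) is to have the generator app lower its own priority when it starts, for example at the top of its main.js:

const os = require('os');

// Lower this process's scheduling priority at startup, roughly what renice does
// on the PID. The niceness value 5 is only illustrative; adjust to taste.
try {
  os.setPriority(process.pid, 5);
} catch (err) {
  console.error('Could not lower process priority:', err);
}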
I'm sometimes having issues with Firebase when the user is on a slow mobile connection. When the user saves an entry to Firebase, I actually have to write to 3 different locations. Sometimes the first write works, but if the connection is slow the 2nd and 3rd may fail.
This leaves me with entries in the first location that I constantly need to clean up.
Is there a way to help prevent this from happening?
var newTikiID = ref.child("tikis").push(tiki, function (error) {
  if (!error) {
    console.log("new tiki created");
    var tikiID = newTikiID.key();
    saveToUser(tikiID);
    saveToGeoFire(tikiID, tiki.tikiAddress);
  } else {
    console.log("an error occurred during tiki save");
  }
});
There is no Firebase method to write to multiple paths at once. Some tools planned by the team (e.g. Triggers) may resolve this in the future.
This topic has been explored before and the firebase-multi-write README contains a lot of discussion on the topic. The repo also has a partial solution to client-only atomic writes. However, there is no perfect solution without a server process.
It's important to evaluate your use case and see if this really matters. If the second and third writes (the geo data) fail, chances are there's really no consequence. Most likely it's essentially the same as if the first write had failed, or if all writes had failed: the entry simply won't appear in searches by geo location. Thus, the complexity of resolving this issue is probably a time sink.
Of course, it does cost a few bytes of storage. If we're working with millions of records, that may matter. A simple solution for this scenario would be to run an audit report that detects broken links between the data and GeoFire tables and cleans up old data.
If an atomic operation is really necessary, such as gaming mechanics where fairness or cheating could be an issue, or where integrity is lost by having partial results, there are a couple options:
1) Master Record approach
Pick a master path (the one that must exist) and use security rules to ensure other records cannot be written, unless the master path exists.
".write": "root.child('maste_path').child(newData.child('master_record_id')).exists()"
2) Server-side script approach
Instead of writing the paths separately, use a queue strategy.
Create a single event by writing it to a queue
Have a server-side process monitor the queue and process events
The server-side process does the multiple writes and ensures they all succeed
If any fail, the server-side process handles rollbacks or retries
By using the server-side queue, you remove the risk of a client going offline between writes. The server can safely survive restarts and retry events or failures when using the queue model.
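A rough sketch of that queue approach, assuming a small Node worker using the firebase-admin SDK. The /queue path, the event shape, and the target paths are all illustrative:

var admin = require("firebase-admin");
admin.initializeApp(); // credentials and databaseURL come from the environment in this sketch

var db = admin.database();
db.ref("queue").on("child_added", function (snap) {
  var event = snap.val();                  // e.g. { tiki: {...}, userID: "..." }
  var tikiID = db.ref("tikis").push().key; // reserve an ID without writing yet
  var updates = {};
  updates["tikis/" + tikiID] = event.tiki;
  updates["users/" + event.userID + "/tikis/" + tikiID] = true;
  db.ref().update(updates)                 // multi-path write done from the server
    .then(function () { return snap.ref.remove(); }) // success: dequeue the event
    .catch(function (err) {
      console.error("write failed, leaving the event queued for retry", err);
    });
});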
I have had the same problem, and I ended up using Conditional Requests with the Firebase REST API in order to write data transactionally. See my question and answer: Firebase: How to update multiple nodes transactionally? Swift 3.
If you need to write concurrently (but not transactionally) to several paths, you can do that now as Firebase supports multi-path updates. https://firebase.google.com/docs/database/rest/save-data
https://firebase.googleblog.com/2015/09/introducing-multi-location-updates-and_86.html
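For the code in the question, a hedged sketch of such a multi-path update using the older client SDK style the question uses. The users/ and tikiLocations/ paths are illustrative, and a real GeoFire index would still have to be written through the GeoFire library rather than inside this update:

var tikiID = ref.child("tikis").push().key(); // reserve an ID without writing data yet
var updates = {};
updates["tikis/" + tikiID] = tiki;
updates["users/" + userID + "/tikis/" + tikiID] = true; // hypothetical link to the user
updates["tikiLocations/" + tikiID] = tiki.tikiAddress;  // illustrative geo record
ref.update(updates, function (error) {
  if (error) {
    console.log("multi-path update failed", error);
  } else {
    console.log("new tiki created");
  }
});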
I've got a question about data flow that is summarized best by the image below:
I've got the data path from the UI (WaveMaker) down to the hardware working perfectly. The question I have is whether I'm missing something in the connection from the Java Service to Wavemaker.
I'm trying to provide information back to Wavemaker from the HW. The specifics of shared memory and semaphore signaling are worked out already. Where I'm running into a problem is how to get the data from the Java Service back to WaveMaker, when it hasn't specifically requested it. My plan was to generate events when the Java Service returned, but another engineer here insists that it won't work, since there's no direct call from Wavemaker and we don't want to poll.
What I proposed was to call the function after the page loaded, allow the blocking to occur at the .so level, as shown below, and then handle the return string when the call returned. We would then call the function again. That has the serious flaw of blocking out interaction with the user interface.
Another option put forth would be to use a hidden control, somehow pass it into Java, and invoke an event on it from Java, which could then be made to execute a script to update the UI with the HW response. That keeps the option of using threads alive, and possibly resolves the issue. Is there some more elementary way of getting information from Java->JavaScript->UI without it having been asked for?
I have my own stripped-down script that currently uses XHR or script tags depending on browser support. These requests ultimately return some JSON. My problem is that elements of this object now need to be updated by the server while on the client, i.e. I need to implement some kind of long-poll/comet solution.
Google turns up lots of solutions using various frameworks such as jQuery to do this on the client side. However, this is not an option for me.
I was wondering if you had any suggestions on how I could extend my existing approach to allow comet-style updates from the server. One of the standard approaches seems to be using hidden iframes. This is a no-go, as the app server that is providing my JSON data is different from the actual web server.
jQuery simply wraps around the XHR/XMLHttpRequest object.
First off, you need a small function to return the object in a cross-browser manner. This can be done in 3 lines or less, not too difficult. That said, there are great snippets out there for fixing different browser issues, such as memory leaks, and I highly suggest you use one of those. Those of course span more than 3 lines (unless minified). But in either case, if you want repeated connections, you just can't do this from scratch.
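For illustration, the small helper described above might look something like this (untested):

function createXHR() {
  // Return a new request object, falling back to the ActiveX name for old IE.
  if (window.XMLHttpRequest) return new XMLHttpRequest();
  return new ActiveXObject("Microsoft.XMLHTTP");
}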
Next, on the server side, assuming you're on PHP:
set_time_limit(300);      // allow this request to stay open for up to 5 minutes
ignore_user_abort(false); // if the client disconnects, terminate immediately

while (true) {
    if (some_condition()) {
        echo some_response();
        break;            // something to report, end this poll
    }
    sleep(2);             // wait two seconds before checking again
}
Client side, just repeat the query whenever the connection ends. At this point, also handle the output.
Client-side example (shown with jQuery for brevity; the same loop works with a plain XHR wrapper):
function poll() {
  jQuery.get('http://somesite.com/poll.php', function (data) {
    alert('Just received: ' + data);
    poll(); // repeat the poll
  });
}
poll(); // begin polling
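Since jQuery is ruled out in the question, the same loop with a plain XHR wrapper (such as the createXHR() helper sketched above) might look like this:

function poll() {
  var xhr = createXHR();
  xhr.open("GET", "http://somesite.com/poll.php", true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {
      if (xhr.status === 200) {
        alert("Just received: " + xhr.responseText);
      }
      poll(); // repeat the long poll whether it returned data or simply timed out
    }
  };
  xhr.send(null);
}
poll();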
I had to develop a newsletter manager with JS + PHP + MySQL, and I would like to know a few things about the browser timing out my JS functions. If I'm running a recursive function that delays a call to itself (while PHP returns a list of emails), how can I be sure that the browser won't time out this JS function?
I'm asking this because I remember using a similar newsletter manager that, while doing the AJAX requests, stopped after a few calls without any apparent reason. I know JS is not meant for this, and I should use crontab on the server, but I can't assume the user's server handles cron, so I had to stick with JS + PHP.
PS - This hasn't happened with this app yet; I'm just trying to prevent the worst-case scenario (since I've tested a newsletter manager that worked the same way as the one I'm developing). Since my dummy email list is small and the delays between calls are also small, this works just fine, but let's imagine a 1,000-contact list with a delay between sends of 120 seconds: sending 30 emails every 2 minutes.
By the way, why do this? Well, many hosting providers have a limit on emails sent per day or hour, and this helps avoid violating that policy.
From the MooTools standpoint, there are several possible solutions here.
request.periodical - http://mootools.net/docs/more/Request/Request.Periodical
It has plenty of options that allow for handling batches of jobs. Look at it like a more complex .periodical (setInterval) that understands the async nature of the result and can compensate for lag, etc. I think it can literally do what you set out in your requirements out of the box; all you need is an onComplete callback that removes the finished job from your pending array (for example).
request.queue - http://mootools.net/docs/more/Request/Request.Queue
Basically, set up all your requests to handle the chunks of data and pass them on to Request.Queue to handle sequentially. It is probably less sophisticated from the point of view of sending-rate control.
How about a meta refresh? That will not cause a timeout in your JavaScript function. You just reload your page after a specific time and then send the next batch of emails out. By adding a parameter to the URL you can keep track of which "round" you are on.
Can this do the job for you?
You need to use setTimeout. The code needs to yield control to the UI thread and let the browser remain responsive to avoid the script from being stopped.
Read this post by Nicholas Zakas:
http://www.nczonline.net/blog/2009/01/13/speed-up-your-javascript-part-1/
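A hedged sketch of that pattern for the newsletter case: process the list in small batches and yield back to the browser with setTimeout between batches. Here sendBatch() is a hypothetical AJAX call to the PHP backend that actually sends the emails:

function processList(emails, batchSize, delayMs) {
  var index = 0;
  function next() {
    if (index >= emails.length) return;           // all done
    var batch = emails.slice(index, index + batchSize);
    index += batchSize;
    sendBatch(batch, function () {                // hypothetical AJAX call to PHP
      setTimeout(next, delayMs);                  // yield to the UI, then continue
    });
  }
  next();
}

processList(emailList, 30, 120000); // 30 emails every 2 minutes, per the question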
There is also something in the W3C spec about this called "Efficient Script Yielding". I'm not sure how far along it is or if any browsers support it.
https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/setImmediate/Overview.html
You could also try HTML5 Web Workers.