How do I completely disable the max-execution-time for scripts in flex? The configurable max is 60 seconds, but I'm calling off to other interactive processes which will probably run much longer than that. Is there an easy way to disable the maximum script execution time across my entire application?
You can't, and that's probably a good thing. Of course it's a pity, but looking at the kinds of things some people build with the Flash Player, I'm quite happy about it.
For simplicity, Adobe decided to promote a single-threaded execution model that allows concurrent operations through asynchronous callbacks. Sometimes this becomes annoying, verbose, and even slower (performing a big calculation in a green thread simply takes longer than doing it directly). It's more of a political choice, so I guess the best you can do is live with it.
Or you could explain what exactly you're up to, so I could propose a solution.
P.S.: there has been quite a lot of discussion about threads for background calculation. Some people use separate SWFs to perform calculations, or push them to Pixel Bender. You may also want to have a look at Alchemy; it supports threading through relatively efficient continuation passing.
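To illustrate the "green thread" idea mentioned above, here is a minimal sketch in JavaScript (the pattern is the same in AS3, where you would typically use a Timer or an ENTER_FRAME handler instead of setTimeout; greenThread and doUnit are illustrative names): slice the calculation into chunks and yield between them so the single-threaded runtime stays responsive.

function greenThread(totalUnits, chunkSize, doUnit, onDone) {
  var done = 0;
  function chunk() {
    var end = Math.min(done + chunkSize, totalUnits);
    // run one batch of the long calculation
    while (done < end) {
      doUnit(done++);
    }
    if (done < totalUnits) {
      setTimeout(chunk, 0); // yield to the event loop, then continue
    } else {
      onDone();
    }
  }
  chunk();
}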
I have a long-running SOAP request that times out with "Error #1502: A script has executed for longer than the default timeout period of 15 seconds."
I went to the right-click Properties dialog on the project in Flash Builder 4, then the Flex Compiler Options.
I set the Flex Compiler Options to "-locale en_US -default-script-limits 1000 60".
The locale was already there. It was the -default-script-limits that was cryptic to decipher from the compiler reference.
But I still got the fault with Error 1502 and 15 seconds. I even did a Project->Clean... command and tried again.
So, where is that 15-second timeout set? It turns out (from some Googling, and I'm not entirely sure) that the Flex compiler accepts my setting, but the timeout message is fixed text that always says 15 seconds.
I also found that I could try -default-script-limits 1000 65535. That didn't help either. This is from a posting on FlashDevelop.org.
The bottom line for me is that I now need to page or otherwise divide up the information I am requesting in the SOAP call. My code still works fine for small requests.
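For illustration, a minimal sketch of the paging idea in JavaScript (the original code is Flex/ActionScript, but the asynchronous pattern is the same; fetchPage is a hypothetical wrapper around one page-sized SOAP call):

function fetchAllPages(pageSize, onAll) {
  var results = [];
  function next(page) {
    // fetchPage is assumed to invoke its callback with one page of rows
    fetchPage(page, pageSize, function (rows) {
      results = results.concat(rows);
      if (rows.length === pageSize) {
        next(page + 1); // a full page means more data may remain
      } else {
        onAll(results); // a short page means we reached the end
      }
    });
  }
  next(0);
}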
In a browser, I am trying to make a well-behaved background job like this:
function run() {
  var system = new System();
  // run one step per timer tick; a 0 ms delay is a request, not a guarantee
  setInterval(function() { system.step(); }, 0);
}
It doesn't matter what that System object is or what the step function does (except that it needs to interact with the UI; in my case it updates a canvas to run Conway's Game of Life in the background). The activity runs slowly and I want it to run faster. But I already specified no wait time in setInterval, and yet when I check the profiling tool in Chrome it tells me the whole thing is 80% idle.
Is there a way to make it spend less time idle and perform my job more quickly on a best-effort basis? Or do I have to write my own infinite loop and then somehow yield time back to the event loop on a regular basis?
UPDATE: It was proposed to use requestIdleCallback, and doing that actually makes things worse. The activity is noticeably slower; even if the profiling data isn't very clear about it, the idle time has indeed increased.
UPDATE: It was then proposed to use requestAnimationFrame, and I find that once again the slowness and idleness are the same as with the requestIdleCallback method; both run at about half the speed I get from the plain setInterval.
PS: I have updated all the timings to be comparable; all three now time about 10 seconds of the same code running. I suspected that the recursive re-scheduling might be the cause of the extra slowness, but I ruled that out: a recursive setTimeout call is about the same speed as the setInterval method, and both are about twice as fast as the new request*Callback methods.
I did find a viable solution for what I'm doing in practice, and I will provide my own answer later, but will wait for a moment longer.
OK, unless somebody comes up with another answer, this is my FINAL UPDATE: I have once again measured all 4 options, timing the elapsed time to complete a reasonable chunk of work. The results are here:
setTimeout - 31.056 s
setInterval - 23.424 s
requestIdleCallback - 68.149 s
requestAnimationFrame - 68.177 s
This provides objective support for my impression above that the two new request* methods perform worse.
I also have my own practical solution, which completes the same amount of work in 55 ms (0.055 s), i.e. more than 500 times faster, while still being relatively well behaved. I will report on that in a while. But I wonder what anybody else can figure out here?
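For reference, here is a minimal sketch of one way such timings can be gathered (not necessarily the exact harness used here; step() stands in for one unit of the background job):

function step() { /* one unit of the background job */ }

function timeScheduler(schedule, totalSteps, label) {
  var t0 = performance.now();
  var steps = 0;
  function tick() {
    step();
    if (++steps < totalSteps) {
      schedule(tick); // keep going with the scheduling variant under test
    } else {
      console.log(label + ': ' + ((performance.now() - t0) / 1000).toFixed(3) + ' s');
    }
  }
  schedule(tick);
}

// Run one variant at a time for clean numbers, e.g.:
timeScheduler(function (f) { setTimeout(f, 0); }, 10000, 'setTimeout');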
I think this really depends on what exactly you are trying to achieve, though.
For example, you could initialize a web worker on page load and have it run the background job, then communicate the progress or status of the job to the main thread of your browser. If you don't like using postMessage for communication between the threads, consider using Comlink.
(See the Web Worker and Comlink documentation.)
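For concreteness, a minimal self-contained sketch of the worker approach, using an inline worker created from a Blob URL (in practice you would more likely use a separate worker.js file; the counting loop is just a stand-in for the real job):

var source =
  'var i = 0;' +
  'setInterval(function () {' +
  '  for (var k = 0; k < 100000; k++) { i++; }' + // one big batch of work per tick
  '  postMessage(i);' +                           // report progress to the page
  '}, 16);';
var worker = new Worker(
  URL.createObjectURL(new Blob([source], { type: 'text/javascript' })));
worker.onmessage = function (e) {
  // back on the main thread: update the UI (e.g. the canvas) here
  console.log('steps done:', e.data);
};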
However, if the background job you intend to run isn't worth a web worker, you could use the requestIdleCallback API. I think it fits what you described, since you can make it recursive: you no longer need a timer, and the browser can schedule the task so that it doesn't affect the rendering of your page (keeping everything at 60 fps).
Something like:
function run() {
  // whatever you want to keep doing
  requestIdleCallback(run)
}
You can read more about requestIdleCallback on MDN.
OK, I really am not trying to prevent others from getting the bounty, but as you can see from the details I added to my question, none of these methods allows high-rate execution of the callback.
In principle setInterval is the most efficient way to do it, since we don't have to re-schedule the next callback every time, but that is only a small difference. Notably, requestIdleCallback and requestAnimationFrame are the worst when you want to be called back rapidly.
So what needs to be done is this: instead of executing only a tiny amount of work and then expecting to be called back quickly, we need to batch up more work per callback. The problem is that we don't know exactly how much work to batch before it becomes too much; in most cases that can probably be figured out with trial and error.
Alternatively, one can dynamically take timing probes to find out how quickly we are being called back, and preemptively exit the work loop when the time budget between callbacks has been used up; see the sketch below.
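Here is a minimal sketch of that batching idea, assuming a step() function that performs one small unit of work (the 8 ms budget is an arbitrary choice to be tuned by trial and error):

function runBatched(step) {
  var BUDGET_MS = 8; // leave headroom in a ~16 ms frame for rendering
  function batch() {
    var start = performance.now();
    // do as many steps as fit into the time budget
    while (performance.now() - start < BUDGET_MS) {
      step();
    }
    setTimeout(batch, 0); // yield to the event loop, then continue
  }
  batch();
}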
I am working on a real-time trading application using Node.js (v0.12.4) and Socket.io (1.3.2). I am seeing a time delay of nearly 100 ms when a response is emitted from Node.js to the GUI (Socket.io).
I don't have a clue why this delay occurs while emitting data from Node.js to the GUI.
This is happening on the production site. Suspecting network latency, we also tried debugging at the production server's location, with the same result.
Can anyone help me with this?
One huge thing to note before doing the following: when measuring timing from the back end (server side) to the front end (client side), you need to run both on the same computer, so that they share the same timing crystal. Quartz-crystal-driven clocks differ from one another, even on high-quality motherboards.
If you find no delay when measuring from back end to front end on the same PC, then the delay you originally found was caused by either the network connection or the difference between the machines' timing crystals. That would eliminate Node.js and Socket.io as the cause of the time delay.
Basically you need to find out where the delay is happening before you can solve the problem.
What you need to do is find out what is causing the largest performance hit in your project. To do this, you will need to isolate the time each process takes. You also need to measure the time delay from the initial retrieval of the data to the release of the data, and then measure the time delay of each function, method, and process. Try to isolate the problem; ask yourself what is taking the most time.
In order to find out where your performance hit is coming from you need to do the following.
Find out how long it takes for your code to get the information it needs.
Find out how long each information-manipulation process takes before it sends out the data, i.e. each function/method.
After the information is sent out, find out how long it takes to reach the client side and load.
Add all of the times up and make sure the total equals your observed delay, to ensure you have all the data required to isolate the performance leak.
Order every method, function, and process by its time delay, from most time-consuming to least. When you find which processes are causing the largest delays, you will be able to fix the problem, or at least explore tangible solutions.
Here are some tools you can use to measure the performance throughout your code.
First: Chrome has a really good tool that lets you see the performance of each piece of executed code.
https://developers.google.com/web/tools/chrome-devtools/profile/evaluate-performance/timeline-tool?hl=en
https://developer.chrome.com/devtools/docs/timeline
Second: the performance-now Node package. You can use it for dev performance testing.
Third: a Stack Overflow post on measuring time/performance in JS.
Fourth: you can use things like console.time().
https://nodejs.org/api/console.html
https://developer.mozilla.org/en-US/docs/Web/API/Console/time
https://developer.chrome.com/devtools/docs/console-api
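As a minimal sketch, here is how you might isolate one stage with console.time(), which works in both Node.js and browsers (processData is a hypothetical stand-in for whatever step you want to measure):

function processData(items) {
  // stand-in for a real manipulation step
  return items.map(function (x) { return x * 2; });
}

console.time('processData');
var result = processData([1, 2, 3]);
console.timeEnd('processData'); // prints something like "processData: 0.05ms"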
I have found out where the time delay is happening.
Once I emit the data from the client socket to Node, I show an alert message ("Data Processed"). That alert message takes time to render in the GUI.
The alert was blocking the response data coming back from Node to the socket.
Thanks for your help, guys.
This is a duplicate question. It has been asked many times before, with dozens of answers, some of them rated very highly. Unfortunately, as far as I have been able to tell, every single one of those answers is a variant of "You don't, it's bad programming practice. Use setTimeout instead".
This is Not. An. Answer!
There are some use cases - rare but they exist - where you might want the entire page's execution to halt for a second or two, and I find it very frustrating that nobody seems interested in answering the actual question. (have a look at the comments here for some examples).
I am sure it's possible to halt JavaScript execution; for instance, if I use Firebug to insert a breakpoint, execution stops when it hits that point. So Firebug can do it. Is there some way a program can halt execution of the current thread until some timeout occurs?
Just some thoughts: How does Firebug do it? Is there some browser-specific method? Is it possible to trigger a stop without specifying a timeout to continue? Could I programmatically insert a breakpoint, or remove one? Could I get a closure representing the current thread to pass to setTimeout?
I don't have a specific use case in mind; I am just looking for advice from someone who knows browser/JavaScript design better than me as to how this can most effectively be done.
So far, I have come up with only one solution:
var endtime = Date.now() + 1000;
while (Date.now() < endtime)
  $.ajax(window.location.origin, { async: false }); // a synchronous request blocks the thread
This appears to work. The problem is that it makes hundreds of excess requests. I could replace location.origin with something like mysite/sleep?delay=X and write a server-side script to provide the delay, which would cut it down to one request, but the whole thing still seems really hacky. There must be a better way to do this! How does the jQuery ajax function manage it? Or is there a busy-wait buried in it somewhere?
The following do not answer the question and will be downvoted, just because I am sick of seeing pages of answers that completely ignore the question in their rush to rant on the evils of sleep:
Sleep is evil, and you should do anything it takes to avoid needing it.
Refactor your code so that you can use setTimeout to delay execution.
Busy-wait (because it doesn't stop execution for the duration of the sleep).
Refactor your code to use deferred/promise semantics.
You should never do this, it's a bad idea...
... because the browser has been, traditionally, single-threaded. Sleeping freezes the UI as well as the script.
However, now that we have web workers and the like, that's not the case. You probably don't need a sleep, but having a worker busy-wait won't freeze the UI. Depending on just how much you want to freeze a particular thread, I've seen people use:
var endtime = Date.now() + 1000;
while (Date.now() < endtime); // spin until the deadline passes
or, curiously (this was in an older but corporate-sponsored analytics library):
var endtime = new Date().getTime() + 1000;
while (new Date().getTime() < endtime); // allocates a new Date on every iteration
which is probably slower. If you're running a busy wait, that doesn't necessarily matter, and allocating objects probably just burns memory and GC time.
Code using promises or timeouts tends to be more modular, but harder to read (especially when you first learn async techniques). That's not an excuse for not using it, as there are definite advantages, but maybe you need everything to stay synchronous for some reason.
If you have a debugger running and want some chunk of code to pause itself (very useful when you have a bunch of nested callbacks), you can use:
function foo() {
  doSomeStuff();
  debugger; // execution pauses here while a debugger is attached
  doOtherStuff();
}
The browser should pause execution at the debugger statement. The debugger can almost always pause execution, because it is in control of the VM running the code; it can just tell the VM to stop running, and that ought to happen. You can't get quite to that level from a script, but if you take source as text (perhaps from a require.js plugin), you can modify it on the fly to include debugger statements, thus "programmatically inserting breakpoints." Bear in mind that they will only take effect when the debugger is already open, though.
To capture the state of a "thread" and persist it for later use, you may want to look into some of the more complicated functional programming concepts, particularly monads. These allow you to wrap a start value in a chain of functions, which modify it as they go, but always in the same way. You could either keep simple state (in some object), or record and reproduce everything the "thread" may have done by wrapping functions in functions. There will be performance implications, but you can pick up the last function later and call it, and you should be able to reproduce everything the thread may have done.
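As a toy illustration of that idea (a sketch only, not a full monad; the helper names are made up): wrap a start value in a chain of functions, and nothing runs until you call the final function, so the "thread" can be stored and resumed later.

function start(value) {
  return function () { return value; };
}
function then(prev, fn) {
  // wrap the previous computation; nothing executes yet
  return function () { return fn(prev()); };
}

var thread = start(2);
thread = then(thread, function (x) { return x + 3; });
thread = then(thread, function (x) { return x * 10; });
// "Resume the thread" later by calling it:
console.log(thread()); // 50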
Those are all fairly complicated and specific-use solutions to avoid just deferring things idiomatically, but if you hypothetically need them, they could be useful.
No, it is not possible to implement a sleep in JavaScript in the traditional sense, as it uses a single-threaded, event-based model. The act of sleeping the thread would lock up the browser it is running in, and the user would be presented with a message either telling them the browser has stopped responding (IE) or allowing them to abort the currently running script (Firefox).
I have a simple web-page (PHP, JS and HTML) that is displayed to illustrate that a computation is in process. This computation is triggered by a pure JavaScript AJAX-request of a PHP-script doing the actual computations.
For details, please see the post referenced in the P.S. below.
What the actual computation is does not play a role, so for simplicity it is just a sleep() command.
When I execute the same code locally (the browser calls the website under localhost: Linux, Apache, PHP module), it works fine, independent of the sleep time.
However, when I run it on a different machine (not localhost, but also Linux, Apache, PHP module), the PHP script does run through (results are created), but the AJAX request never gets a response, so there is no "onreadystatechange", whenever the sleep time is >= 900 seconds. When the sleep time is < 900 seconds it works nicely and the AJAX request is correctly terminated (readyState == 4 and status == 200).
The Apache and PHP configurations are more or less default, and I have already verified the crucial options there (max_execution_time etc.), but none seems to apply here, as they are all either shorter (< 1 min) or longer, e.g. the garbage collector (24 min).
So I am absolutely confused about what may cause this. I suspect it might be network-related, although I didn't find any appropriate option in my router or elsewhere.
Also, no error is reported in the Apache logs or in PHP (error logging to file).
When I let the JavaScript making the AJAX request display request.status upon successful return, surprisingly, if I hit "Esc" in the browser window after the sleep is over, I do get the status "200" displayed, but it does not happen automatically, as it should.
In any case, I am hoping you may have an idea of how to circumvent this problem.
Maybe some dummy communication between client and server every 10 minutes or so might do the trick, but I have no idea how best to do something like this, especially in a way that is transparent to the user and does not interfere with the actual work of the computations/sleep.
Best,
Shadow
P.S. The post that I am referencing was written by me, but it seems to convey the idea that the problem might be related to some config option, which appears not to be the case. That is why I am writing this post here, basically asking for a way to circumvent such an issue regardless of its origin.
I'm from the other post you mentioned!
Now that I know more about what you are trying to do (monitor a possibly long-running server job), I can recommend something that should turn out a lot better. It's not a direct answer to your question, but it is a design consideration that by its nature includes a more suitable solution.
Basically, decouple the action of starting the server-side task from monitoring its progress.
execute.php kicks off your background job on the server, and immediately returns.
Another script/URL (let's call it status.php) is available to check the progress of the task execute.php is performing.
When status.php is requested, it does not return until it has something to report, UNLESS 30 seconds (or some other fixed amount of time) passes, at which point it returns a value that you know means "check again". Request it in a loop, as sketched below, and you can be notified almost immediately when the background task has completed.
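Here is a minimal client-side sketch of that loop in plain JavaScript (status.php and its "pending" reply value are assumptions; adapt them to your own endpoint):

function pollStatus(onDone) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'status.php', true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState !== 4) return;
    if (xhr.status === 200 && xhr.responseText !== 'pending') {
      onDone(xhr.responseText); // the job finished; report the result
    } else {
      pollStatus(onDone); // "check again": immediately issue the next poll
    }
  };
  xhr.send();
}

pollStatus(function (result) { console.log('Job finished:', result); });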
More details on an approach similar to this: http://billhiggins.us/blog/2011/04/27/resty-long-ops
I hope this helps give you some design ideas to address your problem!
I'm developing a SCORM compliant LMS, and having some problems with Captivate generated contents.
Basically, the behavior is: if you go through a SCO (Captivate-generated content) with, for example, 15 slides and 1 question per slide quickly, my LMS does not track all 15 questions, only the first 3 or 4. If you wait a long time at the end, or if you take the content slowly, it works fine.
After a lot of Google searches, debugging, and tracing, I finally found two main issues:
1) Captivate - SCORM API communication is asynchronous (the same as Flash - JavaScript communication). So, when the user goes through the content quickly, the function calls get more and more delayed, and at the end the user may be answering question 15 while the content is still sending question 4's information. I cannot change the Flash or JS-Flash interface, because it is provided by Captivate.
Is there a way to make this synchronous? I mean, to force Flash to wait somehow?
2) The functions take longer each time they are called; for example, SetValue takes 7 milliseconds the first time and 200 the last time it is called.
To understand this problem, here is a little background:
Captivate content (all content really, but Captivate especially) calls a specific function many times: the SetValue function, one of the SCORM API functions. This function takes two parameters (fieldName, value); the first one is the name of the field to be set, and the second is the new value. In my implementation, this function first validates the value using a regular expression, and then sets the value on an object, roughly as sketched below.
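Roughly, a sketch of the kind of SetValue implementation described (the validation pattern here is illustrative, not the real SCORM data model):

var scormData = {};
var validators = {
  // hypothetical example: the raw score must be a number
  'cmi.core.score.raw': /^\d{1,3}(\.\d+)?$/
};

function SetValue(fieldName, value) {
  var pattern = validators[fieldName];
  if (pattern && !pattern.test(value)) {
    return 'false'; // the SCORM API returns strings, not booleans
  }
  scormData[fieldName] = value;
  return 'true';
}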
OK, I could add a lot more info, but I don't know what is really important. I'm not hoping you will fix my code without seeing it, but I'm out of ideas and need new opinions, ideas, directions... maybe somebody will ask the right question... help :)
Thanks
When publishing for SCORM, Captivate does not use synchronous communication methods.* Depending on the browser, Captivate uses either FSCommand or the old-school getURL method to communicate with the HTML file; the HTML file then uses JavaScript to relay the data to the LMS via the SCORM API.
The response (if any) is relayed from JavaScript to either FSCommand or a proxy SWF (for getURL), which is then monitored internally in Captivate via a callback function. This callback function uses timers, and that's probably where your problem lies.
If you're setting g_intAPIType to 0, you're forcing the browser to use FSCommand, which isn't supported in all browsers and operating systems. Setting g_intAPIType to 1 means you're forcing the browser to use getURL, which is cross-browser but has a few drawbacks (including lots of clicking sounds).
In both cases, the data is sent via an internal queue script, which uses the waitForResponse callback function.
The performance problems you're encountering are likely due to the queuing, and the asynchronous communication compounds the problem because of timers attached to waitForResponse. Changing g_intAPIType will probably only have a minor effect on your performance issues, though using getURL (g_intAPIType=1) may help improve consistency from browser to browser.
Regardless of the g_intAPIType settings, you cannot prevent the internal tracking mechanism from using the asynchronous waitForResponse function, so there is no way to stop Captivate from using timers when getting/setting data; over a period of time you will probably start to notice longer and longer delays like the ones you described, esp. if you're making a lot of calls to the LMS.
(* Small exception: I've been informed Captivate 4 and 5 use ExternalInterface if the project is built in AS3 and is published for SCORM 2004, but it appears the queue and waitForResponse timers are still used, basically treating ExternalInterface like the asynchronous methods listed above.)
Some Options:
You could change how you are doing the questions: instead of 1 per frame, put all the questions on 1 frame.
Otherwise, you will need to do some JavaScript magic in your SCORM player JavaScript. I would start by minimizing the JS code with a tool like JSMin.
Then try to cache the JS files so they are only loaded once. I suspect that the files are being called over and over with each frame.
"There is a way to make this sync?? I mean, to force the flash wait some way?"
Apparently, the problem is this one:
"Captivate is the only SCO that calls SCORM JavaScript functions asynchronously. Firefox is the only browser that does not force synchronous communications between the SCO and the supporting JavaScript. When a Captivate SCO, running on Firefox, submits a status update to one of the JS functions, Captivate does not wait for a success or fail response before submitting the next status update. Since Captivate is quite verbose in its communications and JavaScript is not multithreaded, quiz status submissions can stack up and overwrite each other. This can cause a loss of data - especially for longer quizzes. [...]
If you'd like to see the asynchronous problem with any other LMS, take a long Captivate quiz using Firefox and answer the questions very quickly. Some of the questions near the end will get dropped." (interzoic.com forum)
And maybe a solution:
"The slow issue is resolved when I force the g_intAPIType to 0 (into the
.htm file), so it force Captivate to communicate as if it was into IE."
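For reference, that flag lives in the Captivate-generated .htm file as a plain JavaScript variable (a sketch; the surrounding template code varies by Captivate version):

// in the Captivate-generated .htm file:
var g_intAPIType = 0; // 0 = FSCommand (IE-style), 1 = getURL (cross-browser)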
In Captivate, while publishing a SCORM package, you will see the option "Send tracking data at the end".
Use this option; it should resolve your problem.