I am using FullCalendar and I really like it!
I am developing a simple application that needs to show a calendar able to render lots of events every day/week, but I am facing an issue. Since the pure HTML/JS web application runs on an embedded device with a Chromium-based browser, when I inject the events (let's say 1000 in a week, in the week view) it takes a lot of time, no matter whether I do it via .addEvent or .addEventSource. I also tried wrapping the calls in a .batchRendering() callback, but no luck.
Then I tried to load them using the "events (as a function)" way, but it is still slow.
The events data is retrieved by my AJAX callbacks in a centralized way, so I cannot use the "events (as a JSON feed)" method.
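This is roughly how I wired it up (a sketch; myAjaxLayer stands in for my centralized AJAX code):

    var calendar = new FullCalendar.Calendar(calendarEl, {
      plugins: ['timeGrid'],
      defaultView: 'timeGridWeek',
      events: function (info, successCallback, failureCallback) {
        // myAjaxLayer is a placeholder for my centralized AJAX callbacks
        myAjaxLayer.getEvents(info.startStr, info.endStr, function (data) {
          successCallback(data); // array of { title, start, end, ... }
        });
      }
    });
    calendar.render();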
The strange thing I noticed is that the successCallback (or the .addEventSource call) returns within a reasonable time, but then the system remains blocked for a long time. If I inspect the HTML, I can see the generated elements being destroyed and created again over and over, keeping my CPU at 100%.
Is it a known issue or am I doing something very wrong?
What would be the fastest way to insert the elements?
Note: I am using FullCalendar v4; could v5 be a more performant alternative?
Thanks :)
Apologies if this sounds strange, but I'll try to describe what's happening as best I can. We were provided a small service that provides a single form UI whose results we capture via an event. We've integrated this into two applications and we're seeing different behavior when we load the UI in a modal window via an iframe. When integrated into one UI it loads pretty quickly; however, in the other UI it takes several seconds to load. The only difference I could find is that a setTimeout is being triggered several seconds after the modal is created. I discovered this using the Firefox developer tools in the Performance tab.
Now, I believe the UI for this form is built in a non-recent version of Angular (AngularJS?), based on some Google searches using strings that I could see in a minified polyfill.xxxx.js file. However, I can't understand the minified code, and I have no version information to help me get back to a version that I can try to read and understand.
I did some testing using the Performance API before the iframe is created, in case the issue was something in my code, but the tested code finishes in < 100 ms, so that didn't appear to be the issue. It's not a network issue either, as the requests complete pretty quickly. Also, both applications reference the same instance, so the only difference is the app it's integrated into.
So, my primary question is: what could cause Angular (AngularJS) to set a timeout on page load? My secondary question: what advice is there for trying to debug this? I don't use Angular at all, so I'm not even sure where to begin beyond what I've already tried. The only custom app code I see looks to be Angular configuration/properties, so there is no JavaScript to debug.
The best advice with setTimeout() in such a situation is not to use setTimeout() at all.
I ran into the same situation: it is not only Angular; most frameworks treat setTimeout() a bit differently.
What I mean is that the actual delay of a setTimeout() will differ between a plain JS app, an AngularJS app, and an Angular app.
setTimeout() is scheduled to execute after a certain time, but that is not guaranteed; the callback runs only when the thread is free.
Sometimes Angular's change detection and watcher life cycle get entangled with setTimeout(), and you end up with strange behavior.
So it is better not to use it unless you are sure it won't interfere with the other things that are running.
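If the embedded form really is AngularJS, one concrete example of this interaction: its $timeout wrapper triggers a digest cycle by default, and you can opt out via the third argument (a sketch, not from the actual app):

    // AngularJS: invokeApply = false tells $timeout not to trigger a
    // $digest when the callback fires, so the timer won't drag change
    // detection / watchers along with it.
    $timeout(function () {
      // plain work that does not touch scope bindings
    }, 3000, false);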
Please share a code snippet if possible.
I need to invent a method to synchronize multiple devices that are connected to a website.
Let's say I have a tablet and a laptop, and I change some elements of the DOM on the laptop; the website on the tablet should then change accordingly.
First I need a way to monitor the DOM changes. I read a lot in forums and have also tried to work with browser-sync (https://github.com/shakyShane/browser-sync), but it does not monitor all events, so it may happen that the websites are not in sync.
I have also tried to use the new MutationObserver techniques, but I am not sure how exactly to update the websites on the other devices:
In case a new node has been added to the DOM, I first have to determine its position inside the tree, then send all the information to the other clients (I guess via Node.js and Socket.IO). The clients have to put the newly created node in the right position of their tree.
In case a node has been edited or removed, I have to react to that as well...
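This is roughly where I am with the MutationObserver approach (a sketch; the 'dom-sync' event name and the path helper are my own inventions):

    var socket = io(); // Socket.IO client

    // Child-index path from <body> down to a node, so the other clients
    // can locate the same position in their own tree.
    function getNodePath(node) {
      var path = [];
      while (node && node !== document.body) {
        path.unshift(Array.prototype.indexOf.call(node.parentNode.childNodes, node));
        node = node.parentNode;
      }
      return path;
    }

    var observer = new MutationObserver(function (mutations) {
      mutations.forEach(function (m) {
        if (m.type === 'childList') {
          Array.prototype.forEach.call(m.addedNodes, function (node) {
            socket.emit('dom-sync', {
              action: 'add',
              parentPath: getNodePath(m.target),
              index: Array.prototype.indexOf.call(m.target.childNodes, node),
              html: node.nodeType === 1 ? node.outerHTML : node.textContent
            });
          });
          // removals and attribute/text edits would be forwarded similarly...
        }
      });
    });
    observer.observe(document.body, {
      childList: true, subtree: true, attributes: true, characterData: true
    });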
So my question is: Do you have any ideas or literature to solve my problem?
I just need some good hints to start, because I am not really sure which method leads to the "best" solution; I want the solution to be efficient as well.
Sure, I could poll the DOM several times a second to check whether changes occurred, but that method is not really efficient, especially not for mobile devices.
I hope you guys can help me and point me in the right direction.
Best regards.
Note: there is nothing ready-made, and this is not so simple to do.
To capture the whole DOM, maybe this will help you: AcID DOM Inspector.
To capture event listeners, use Visual Event 2.
Avoid capturing frames (or use them with care), because security issues (cross-origin) may occur.
In my view it is not necessary to capture the entire document; it would only be necessary to capture insertions into BODY and HEAD.
And avoid complex structures. Perhaps the event listeners need not be captured at all; you would only need a library that re-attaches the already existing events. Unless the events are created by the clients, in which case you will have to capture them.
To transmit the changes you can use Ajax and share the data through a database (SQL Server, MySQL, MongoDB, etc.).
From my point of view:
If what you want to create is an application just to share data, for example someone posted a "new photo" or a "new comment", it would be better to transmit what was inserted into the database as JSON and have the clients read that JSON; your main JavaScript library would then dynamically generate the contents.
That way you would transmit much less, making the whole process faster and easier to implement.
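A sketch of that idea (the URL, field names, and interval are just placeholders):

    var lastId = 0;

    // Poll the server for records inserted since the last one we saw and
    // let the client-side code generate the markup from the JSON.
    setInterval(function () {
      $.getJSON('/updates?since=' + lastId, function (items) {
        $.each(items, function (i, item) {
          lastId = item.id;
          $('#comments').append(
            $('<div class="comment">').text(item.author + ': ' + item.text)
          );
        });
      });
    }, 2000);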
I'm making an HTML5 page (a game) which uses lots of popups and all kinds of widgets appearing and disappearing on the same page.
To implement this I could:
1. Have all the popups and widgets present in the page but invisible (like in lots of examples I saw), and only toggle their visibility.
2. Add and remove them dynamically using JavaScript. I could put each popup as an HTML fragment in a separate file (?).
The second is "modular" and I like the fact that I have no elements in the page which I'm not actually using, but I don't know about the performance (loading the HTML each time, DOM insertion, etc.).
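In code, the two options look roughly like this (the element/template ids are placeholders):

    // Option 1: popup already in the page, just toggle visibility
    document.getElementById('shop-popup').style.display = 'block';

    // Option 2: create it on demand from a <template>, remove it afterwards
    var tpl = document.getElementById('shop-popup-template');
    var popup = tpl.content.firstElementChild.cloneNode(true);
    document.body.appendChild(popup);
    // later: popup.parentNode.removeChild(popup);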
Is there a preferred/standard way to do this?
If we are talking about loading the HTML from the server each time, then obviously this won't be efficient.
I don't know what kind of game you are writing, but I don't think there will be any visible difference in performance (except for loading data from the server) unless you create something like thousands of popups per second, which I doubt. Let's be honest: your game isn't using 4 GB of memory. :) And if it is, then you're probably doing something wrong. I also don't think there is a standard way; it's more about how you feel it. :)
For example, I always try to load all the data I can from the server in one request and store it on the client side, because most performance problems are actually related to client-server communication. I also like the DOM to be clean, so in most cases I keep the (hidden) data in JavaScript, except for forms' hidden fields.
On the other hand, if I have, for example, a blog with a discussion and I load some additional data (for example user data that is supposed to appear as a popup after a click on the user's name), I tend to store it in DOM elements, because it is easier (at least for me) to control it that way (I'm talking about jQuery and jQuery UI).
Note that it is possible that recreating popups may lead to memory leaks, but that is highly unlikely if you use a popular library like (for example) jQuery UI.
I use JavaScript for rendering 20 tables of 100 rows each.
The data for each table is provided by controller as JSON.
Each table is split into sections that have "totals" and some other JavaScript logic. Some totals are outside the table itself.
As a result, JavaScript blocks the browser for a couple of seconds (especially in IE6) :(
I was considering using http://code.google.com/p/jsworker/; however, Google Gears workers (I guess workers in general) will not allow me to make changes to the DOM from the worker code, and it also seems that I cannot use jQuery inside jsworker worker code. (Maybe I am wrong here?)
This issue seems to be fundamental to JavaScript coding practice; can you share your thoughts on how to approach it?
Workers can only communicate with the page's execution by passing messages. They are unable to interact with the DOM directly, because concurrent access from a second thread would introduce enormous difficulties.
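The shape of that message passing looks roughly like this (a sketch; the file name and the buildTablesHtml helper are assumptions):

    // main page: send the raw data over, apply the result when it returns
    var worker = new Worker('totals-worker.js');
    worker.onmessage = function (e) {
      // only the main thread touches the DOM
      document.getElementById('report').innerHTML = e.data.html;
    };
    worker.postMessage({ rows: tableData });

    // totals-worker.js: pure computation, no DOM or jQuery available here
    onmessage = function (e) {
      var html = buildTablesHtml(e.data.rows); // hypothetical pure function
      postMessage({ html: html });
    };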
You will need to optimise your DOM manipulation code to speed up the processing time. It's worth consulting Google for good practices.
One way to speed up execution is to build the table outside of the DOM and insert it into the document only when it is completed. This stops the browser having to re-draw on every insertion, which is where a lot of the time is spent.
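For example, building the markup as a string and writing it once (a sketch; the column names are placeholders):

    function renderTable(rows) {
      var html = ['<table><tbody>'];
      for (var i = 0; i < rows.length; i++) {
        html.push('<tr><td>' + rows[i].name + '</td><td>' + rows[i].total + '</td></tr>');
      }
      html.push('</tbody></table>');
      // one write to the live document instead of 100 separate appends
      document.getElementById('report').innerHTML = html.join('');
    }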
You are not supposed to change the UI from a background worker in general. You should always signal the main thread that the worker is finished and return a result to the main thread, which can then process it.
HTML5 includes an async attribute for script tags, and writing to the document from such a script (with document.write) leaves you with a blank page containing just the stuff you wanted to write.
So you need another approach there.
I am not a JS guru, so I can't help you with an implementation, but at least you now have the concept :)
If you want to dig into the world of browser performance, the High Performance Web Sites blog has heaps of great info, including when JavaScript blocks page rendering and best practices to avoid such issues.
Up to now, for DB-driven web sites, I've used PHP (and CodeIgniter) to populate the data within the page prior to rendering. What I'm thinking about doing now is to develop a JavaScript (via jQuery) page, make it as interactive as possible, and then connect to the DB through AJAX/JSON calls, so NO data is populated into the page prior to rendering.
WHY? The idea is that I can, some day, hook the same web page up to different data sources: a true separation of page from data, linked only via AJAX.
I think the biggest issue could be performance... are there other things to watch out for? What's the best approach to handling security (stateless/sessionless)?
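Roughly the shape I have in mind (the endpoint and field names are placeholders):

    // Page ships with empty containers; all data arrives via AJAX on load.
    $(function () {
      $.getJSON('/api/page-data', function (data) {
        $('#title').text(data.title);
        $.each(data.items, function (i, item) {
          $('#list').append($('<li>').text(item.label));
        });
      });
    });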
The biggest question is accessibility. What about people using screen readers, for whom JavaScript doesn't work? What about those on mobile phones (non-smartphones), again with very limited or no JavaScript functionality? What about people who have simply disabled JS? Even these days, you simply can't assume that everyone can use JS.
I like the original idea, but perhaps this would be better done via a simple server-side wrapper, which calls out to your data source but which can be quickly and easily changed to point at a different one.
Definitely something I've considered doing, but you'd probably want to develop some kind of framework (or see if someone already has) if you're going to do this. Brute-forcing this kind of thing will lead to a lot of redundant code and unnecessary hair loss. Perhaps a jQuery plugin? I'd be very interested to see what you come up with.