I am new to React.js. I am trying to quantify the time an HTML document takes to render fully. Can somebody please tell me where my approach is wrong? I find the results are not comparable at all.
Using React
var UList = React.createClass({
    setText: function() {
        var updatedItems = this.state.items;
        var item = Math.abs(Math.random().toString().split('')
            .reduce(function(p, c) { return (p << 5) - p + c; })).toString(36).substr(0, 11);
        updatedItems.push(item);
        this.setState({items: updatedItems});
    },
    getInitialState: function() {
        return {
            items: []
        };
    },
    render: function() {
        return (
            <div id='component'>
                <button id='bb' type="button" onClick={this.setText}>Set</button>
                <List items={this.state.items}/>
            </div>
        );
    }
});
var List = React.createClass({
    render: function() {
        return (
            <ul>
                {
                    this.props.items.map(function(item) {
                        return <li key={item}>{item}</li>;
                    })
                }
            </ul>
        );
    }
});
ReactDOM.render(<UList text="kiran"/>, document.getElementById('container'));
console.log(new Date());
for (var i = 0; i < 1000; i++) {
    document.getElementById("bb").click();
}
console.log(new Date());
Using plain JavaScript
setText = function() {
    var item = Math.abs(Math.random().toString().split('')
        .reduce(function(p, c) { return (p << 5) - p + c; })).toString(36).substr(0, 11);
    var node = document.createElement("LI");
    var textnode = document.createTextNode(item);
    node.appendChild(textnode);
    document.getElementById("myList").appendChild(node);
};
console.log(new Date());
for (var i = 0; i < 1000; i++) {
    document.getElementById("bb").click();
}
console.log(new Date());
In this particular case, React may well be slower overall in processing the 1000 clicks. What you are essentially measuring is the cycle time between one click and one update. In such a one-to-one comparison, direct JavaScript DOM updates are likely to be faster (rendering the HTML directly from the server would probably be faster still).
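As an aside, `new Date()` only has millisecond resolution; `performance.now()` gives sub-millisecond timestamps and is better suited to this kind of measurement. A minimal sketch (the `work` callback stands in for the click loop from the question):

```javascript
// Minimal high-resolution timing harness (sketch).
// performance.now() returns fractional milliseconds, unlike Date.
function timeIt(label, work) {
    var start = performance.now();
    work();
    var elapsed = performance.now() - start;
    console.log(label + ': ' + elapsed.toFixed(3) + ' ms');
    return elapsed;
}

// Example: time a synchronous loop (in the question this would be
// the 1000 simulated button clicks).
var elapsed = timeIt('1000 iterations', function () {
    for (var i = 0; i < 1000; i++) { /* work */ }
});
```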
Some notes:
Cycle time is not the only factor in comparing performance: when doing DOM updates, React in many cases leaves more CPU time available to the user, e.g. for scrolling while React is updating. This is not measured by your test.
Where React really shines is in doing updates: if you delete, update, and add many different items in a list of 1000, React is likely to be much faster than the pure-JavaScript variant.
UPDATE:
One of Facebook's key reasons for building React is that typical DOM updates are really slow, so React minimizes DOM updates through its diffing engine.
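The idea behind the diffing can be illustrated with a toy keyed diff (an illustration only, not React's actual reconciliation algorithm): instead of rebuilding the whole list, compute which keys appeared or disappeared and touch only those DOM nodes.

```javascript
// Toy keyed diff (sketch; React's real reconciler is far more
// sophisticated). Given old and new arrays of keys, report which
// items were added and which were removed.
function diffKeys(oldKeys, newKeys) {
    var oldSet = {}, newSet = {};
    oldKeys.forEach(function (k) { oldSet[k] = true; });
    newKeys.forEach(function (k) { newSet[k] = true; });
    return {
        added: newKeys.filter(function (k) { return !oldSet[k]; }),
        removed: oldKeys.filter(function (k) { return !newSet[k]; })
    };
}

// diffKeys(['a','b','c'], ['b','c','d']) reports 'd' added and 'a'
// removed, so only two DOM operations are needed, not a full rebuild.
```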
If you want to know more about performance, look here for a comparison of React vs Angular vs Knockout.
This video from an Angular conference made me aware that there is much more to performance than cycle time: how different processes behave within one cycle has important implications as well.
A nice, clean sample test by Rich Ayotte here also compares React with raw JavaScript. It shows that raw JavaScript is actually faster than React at rendering a long list.
The bottomline:
for small and simple applications, raw JavaScript is probably faster
but if you want to maintain performance as your application grows bigger and more complex, React is a safe bet.
For my Angular 1.5 file upload component, I implemented a preview.
When the user selects a file, it is read by a FileReader and on load end, I store the content in a controller variable like
fileReader.onloadend = function(ev) {
    $rootScope.$evalAsync(function($scope) {
        $ctrl.rawFileContent = ev.target.result;
    });
};
The template shows it to the user
<pre ng-show="$ctrl.rawFileContent">{{$ctrl.rawFileContent}}</pre>
This worked pretty well for my test during development.
The Problem
Real-world users are going to upload files larger than 500 kB, and then it takes minutes until the template is rendered.
Profiling showed, that some internal function in 'content-script.bundle.js' takes by far most of the time.
Update:
And worst of all ;) it has to run in IE11.
What I tried so far
using one time binding {{::$ctrl.rawFileContent}} which yielded no improvement
fetching the <PRE> and setting its inner text via direct DOM manipulation, which still takes long and seemed a bit 'non-angulary' to me.
<pre ng-show="$ctrl.rawFileContentReady" class="previewTarget"></pre>
$rootScope.$evalAsync(function($scope) {
    var previewTarget = document.getElementsByClassName("previewTarget")[0];
    previewTarget.innerHTML = ev.target.result;
});
Do you have any ideas, why this takes so long?
How can I speed it up?
Or, as a workaround, can I track the progress somehow to inform the user?
This link shows how to report progress.
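One pragmatic workaround is to bind only a slice of the file for preview, since it is the multi-hundred-kB interpolated string that stalls rendering. A sketch (`previewSlice` and the cutoff are hypothetical names I chose for illustration):

```javascript
// Hypothetical helper: cap the preview at maxChars so the template
// only ever binds a small string, regardless of the file's size.
function previewSlice(content, maxChars) {
    if (content.length <= maxChars) {
        return content;
    }
    return content.slice(0, maxChars) + '\n… (' +
        (content.length - maxChars) + ' more characters not shown)';
}
```

In the `onloadend` handler this would be `$ctrl.rawFileContent = previewSlice(ev.target.result, 10000);`. Also, when writing file data into the DOM directly, prefer `textContent` over `innerHTML` so the content cannot be interpreted as markup.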
EDIT: I think I should make things more obvious.
What I am trying to do is make the function that displays the "time ago" for a post's submitted date auto-refresh every minute, so that it stays relatively accurate even if the template is not re-rendered.
I'd like to auto update my timeago value in my template but it is not working.
I've tried to set up my code with a reactive function, based on the answer to a similar question (https://stackoverflow.com/a/17933506)
Here's my code:
var timeAgoDep = new Deps.Dependency(); // !!!
var timeAgo;
var timeAgoInterval;

Template.postItem.created = function() {
    function getTimeago() {
        //var now = new Date();
        timeAgo = moment(this.submitted).twitter();
        timeAgoDep.changed(); // !!!
    }
    getTimeago(); /* Call it once so that we'll have an initial value */
    timeAgoInterval = Meteor.setInterval(getTimeago, 5000);
};

Template.postItem.posted = function() {
    timeAgoDep.depend(); // !!!
    return timeAgo;
};

Template.postItem.destroyed = function() {
    Meteor.clearInterval(timeAgoInterval);
};
I'm pretty sure that the problem comes from this.submitted because if I assign timeAgo = now for example, it will display the time and update like it's supposed to.
I also know that moment(this.submitted).twitter() works fine because when all I do is return it through a helper, it works.
A much better way to do this is to just embrace Meteor's reactivity and render time-dependent values reactively. In your case, the problem is that you are invalidating the dependency once every 5 seconds for each postItem rendered, which will quickly turn into a huge mess.
See https://github.com/mizzao/meteor-timesync for a package that provides reactive time variables on the client (and they are synced to server time too!) It's basically doing what you want, but in a cleaner way. (Disclaimer: I wrote this package.)
You can use moment in the same way to compute the actual string to display. For example, get rid of all the other stuff and just use
Template.postItem.posted = function() {
    return moment(this.submitted).from(TimeSync.serverTime());
};
The moment().twitter() extension doesn't seem like a good choice because it only uses the current client time and doesn't allow you to pass in a specific (i.e. server-synced) time or reactive value.
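For what it's worth, the "time ago" computation itself is simple enough to sketch in plain JavaScript (moment and TimeSync are swapped out here purely for illustration; moment's `from()` handles far more cases):

```javascript
// Minimal "time ago" formatter (illustration only).
// Both arguments are millisecond timestamps.
function timeAgo(thenMs, nowMs) {
    var seconds = Math.floor((nowMs - thenMs) / 1000);
    if (seconds < 60) return seconds + 's';
    var minutes = Math.floor(seconds / 60);
    if (minutes < 60) return minutes + 'm';
    var hours = Math.floor(minutes / 60);
    if (hours < 24) return hours + 'h';
    return Math.floor(hours / 24) + 'd';
}
```

Passing a reactive "now" (such as `TimeSync.serverTime()`) as the second argument is what makes the displayed value update automatically.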
I have created a tree control using the Kendo TreeView. It has more than 10,000 nodes, and I used loadOnDemand: false when creating the tree.
I provide a feature to expand the tree by level: I created a method which takes a level number as parameter and expands the tree accordingly, and the user can pass up to 15 (the max level). It works fine with 500 to 600 nodes across all levels, but when the tree has more than 5000 nodes and the user tries to expand beyond the 2nd level, the browser hangs and shows a "not responding" error.
The method I created to expand the tree:
function ExapandByLevel(level, currentLevel) {
    if (!currentLevel) {
        currentLevel = 0;
    }
    if (level != currentLevel) {
        var collapsedItems = $("#treeView").find(".k-plus:visible");
        if (collapsedItems.length > 0) {
            setTimeout(function () {
                currentLevel++;
                var $tree = $("#treeView");
                var treeView = $tree.data("kendoTreeView");
                var collapsedItemsLength = collapsedItems.length;
                for (var i = 0; i < collapsedItemsLength; i++) {
                    treeView.expand($(collapsedItems[i]).closest(".k-item"));
                }
                ExapandByLevel(level, currentLevel);
            }, 100);
        } else {
            //console.timeEnd("ExapandByLevel");
            hideLoading();
        }
    }
    if (level == currentLevel) {
        hideLoading();
    }
}
Call the method like this:
ExapandByLevel(15);
Here 15 is the level to expand to.
When the tree has more than 5000 nodes and the user tries to expand beyond the 2nd level, the browser hangs and shows a "not responding" error.
Please suggest a way to do this; what I want is to expand a tree that can contain more than 5000 nodes.
I had a similar problem with the Kendo TreeView when I wanted to load a tree with 30,000 nodes. The browser would freeze for a long time loading this number of nodes even with loadOnDemand set to true.
So we decided to implement server-side functionality for expanding nodes, and that's what I suggest you do. You need two changes in your existing code:
Change your tree to use a server-side Expand method.
When you call expand, make sure the node is actually expanded.
These two steps are explained below. The thing you should know is that this way your browser doesn't hang at all, but the operation may take some time to complete because there will be many web-service calls to the server.
Change your tree to use a server-side Expand method:
Please see Kendo UI's demo for binding to remote data at this link. Note that loadOnDemand should be set to true. In addition, the server-side Expand web service has to be implemented.
When you call expand, make sure the node is actually expanded:
In order to do this there should be an event like Expanded in the Kendo UI TreeView, but unfortunately there is none except the Expanding event. Using setTimeout in this case is not reliable, because the network is not reliable. So we ended up using a while statement to check whether the node's children have been created. There might be a better solution, but this satisfied our requirement at the time. Here's the change to make when expanding nodes:
if (collapsedItems.length > 0) {
    currentLevel++;
    var $tree = $("#treeView");
    var treeView = $tree.data("kendoTreeView");
    var collapsedItemsLength = collapsedItems.length;
    for (var i = 0; i < collapsedItemsLength; i++) {
        var node = $(collapsedItems[i]).closest(".k-item");
        if (!node.hasChildren)
            continue; // do not expand if the node does not have children
        treeView.expand(node);
        // wait until the node is expanded
        // (caveat: a synchronous busy-wait like this blocks the single
        // JavaScript thread, so the expand response may never get a
        // chance to arrive; asynchronous polling is safer)
        while (!node.Children || node.Children.length == 0);
    }
    ExapandByLevel(level, currentLevel);
}
You can also issue the expand calls in parallel to decrease the loading time, but then you have to change the way you check whether all the nodes are expanded. The code above is just a sample that should work fine.
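A non-blocking alternative to the busy-wait is to poll with setTimeout. A generic sketch (`waitFor` is a hypothetical helper; the condition you pass in would be whatever "children are loaded" check fits your data source):

```javascript
// Poll a condition without blocking the event loop.
// `check` is called immediately, then every `interval` ms until it
// returns true or `maxTries` is exhausted; `done` receives a success flag.
function waitFor(check, done, interval, maxTries) {
    if (check()) {
        done(true);
        return;
    }
    if (maxTries <= 0) {
        done(false);
        return;
    }
    setTimeout(function () {
        waitFor(check, done, interval, maxTries - 1);
    }, interval);
}
```

In the expand loop this would look like `waitFor(function () { return childrenLoaded(node); }, expandNextNode, 100, 50);`, where both callbacks are placeholders for your own logic.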
Hope this helps.
The solution to your problem is pretty simple: update the version of Kendo UI you are using, since they have optimized the code for HierarchicalDataSource and TreeView a lot.
Check this: http://jsfiddle.net/OnaBai/GHdwR/135/
This is your code with the version of kendoui.all.min.js changed to v2014.1.318. I didn't even change the CSS (although you should). You will see that opening those 5000 nodes is pretty fast.
Nevertheless, if you go to 10,000 elements you will very likely still find it slow, and sorry for challenging you, but: do you really think a 10,000-node tree is user-friendly? Is a tree the right way to present such a huge amount of data?
I'm playing around with benchmarking in order to see how much of the pie my custom JavaScript takes up vs the things out of my control as far as optimization is concerned: DOM/network/painting, etc.
I would use Chrome's dev tools for this, but I don't see an accurate pie chart since my functions make AJAX calls, and therefore network time is added to the JavaScript portion of the pie (as well as DOM and other 'out of my control' stuff).
I'm using benchmarkjs (http://benchmarkjs.com/) to test this line:
document.querySelector("#mydiv").innerHTML = template(data);
where template is a precompiled handlebars template.
To the question...
I've broken the process down into 3 parts and took the mean of each run:
document.querySelector("#mydiv") - 0.00474178430265463
myDiv.innerHTML = already_called_template - 0.005627522903454419
template(data) - 0.004687963725254854
But all three together (the one-liner above) turn out to take 0.005539341673858488, which is less than the lone call that sets innerHTML.
So why don't the parts equal the sum? Am I doing it wrong?
Sample benchmark below (I'm using deferred because I plan to add AJAX next):
var template = Handlebars.compile(html);
var showStats = function(e) { console.log(e.target.stats.mean); };
var cachedDiv = document.querySelector('#myDiv');
var cachedTemplate = template(data);

new Benchmark('just innerHTML', function(deferred) {
    cachedDiv.innerHTML = cachedTemplate;
    deferred.resolve();
}, {defer: true, onComplete: showStats}).run();

new Benchmark('full line', function(deferred) {
    document.querySelector('#myDiv').innerHTML = template(users);
    deferred.resolve();
}, {defer: true, onComplete: showStats}).run();
http://mrale.ph/blog/2012/12/15/microbenchmarks-fairy-tale.html
Turns out the JIT is probably doing some crazy stuff. The answer would be to try to outsmart it, but I think I should adjust my strategy instead.
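One common way the JIT breaks microbenchmarks: if a measured expression's result is never used, the optimizer may eliminate the work entirely, so the "part" timings no longer add up to the "whole". A sketch of keeping a live sink to defeat dead-code elimination (not tied to benchmark.js; the harness below is deliberately simplistic):

```javascript
// Accumulate results into a live variable so the JIT cannot prove
// the measured work is unused and delete it.
var sink = 0;

function measure(label, fn, iterations) {
    var start = Date.now();
    for (var i = 0; i < iterations; i++) {
        sink += fn(i); // consume the result on every iteration
    }
    return Date.now() - start; // elapsed milliseconds
}

var elapsed = measure('square', function (i) { return i * i; }, 1e6);
```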
I have a list linked to a store filled with Facebook friends. It contains around 350 records.
There is a search field at the top of the list which triggers the following function on keyup:
filterList: function (value) {
    // console.time(value);
    if (value === null) return;
    // note: the 'g' flag makes test() stateful via lastIndex, which can
    // skip matches across records; 'i' alone is what is wanted here
    var searchRegExp = new RegExp(value, 'i'),
        all = Ext.getStore('Friends'),
        recent = Ext.getStore('Recent'),
        myFilter;
    all.clearFilter();
    recent.clearFilter();
    // console.log(value, searchRegExp);
    myFilter = function (record) {
        return searchRegExp.test(record.get('name'));
    };
    all.filter(myFilter);
    recent.filter(myFilter);
    // console.timeEnd(value);
},
Now, this used to work fine with ST 2.1.1, but since I upgraded the app to ST 2.2 it's really slow. It even makes Safari freeze and crash on iOS...
This is what the logs give :
t /t/gi Friends.js:147
t: 457ms Friends.js:155
ti /ti/gi Friends.js:147
ti: 6329ms Friends.js:155
tit /tit/gi Friends.js:147
tit: 7389ms Friends.js:155
tito /tito/gi Friends.js:147
tito: 7137ms
Does anyone know why it behaves like this now, or does anyone have a better filter method?
Update
Calling clearFilter with a true parameter seems to speed things up, but it's not as fast as before yet.
Update
It actually has nothing to do with filtering the store.
It has to do with rendering list items: Sencha apparently creates a list item for every record in the store instead of creating just a couple of list items and reusing them.
Could there be an obvious reason it's behaving this way?
Do you have the "infinite" config on your list set to true?
http://docs.sencha.com/touch/2.2.0/#!/api/Ext.dataview.List-cfg-infinite
You probably don't want the list rendering 300+ rows at once, so setting that flag will reduce the amount of DOM that gets generated.
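For reference, a minimal sketch of a list with that config (field names are illustrative; `infinite` is the documented Ext.dataview.List config):

```javascript
// Sketch: an infinite list renders only enough items to fill the
// viewport and recycles them while scrolling, instead of creating
// one DOM node per record.
Ext.create('Ext.dataview.List', {
    fullscreen: true,
    infinite: true,
    store: 'Friends',    // the ~350-record store from the question
    itemTpl: '{name}'    // illustrative field name
});
```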