I have a webpage where the user can select one of several packages to buy from a list. The package details come from a database.
HTML Code
<div data-package='2346343' class="retail-package">Cost : 10$</div>
<div data-package='5465654' class="retail-package">Cost : 20$</div>
<div data-package='3455675' class="retail-package">Cost : 30$</div>
jQuery Code
$('.retail-package').on('click', function() {
    $(this).addClass("selected-package");
    var selectedPackage = $(this).data("package");
});
The code above shows how we (or at least I) normally select a particular item from a list when it is clicked. As you can see in the HTML, I am exposing the packageId to users: anyone can inspect the element in a browser and view, or even manipulate, the data-package attribute. For safety I do a server-side check of the selected data.
My Question
Is there a way to hide this data exposure, or is there a cleaner way to accomplish this? I have seen people using Angular, Webpack, etc. implement list selection without exposing any data that can be seen via the browser's inspect-element feature.
Note: I am sorry if my question is too basic. If this cannot be done using jQuery, what other technologies could I use?
You may create a Map whose keys are arbitrary, auto-incremented identifiers and whose values are package numbers:
const idPackageMap = new Map()

// id generator: whenever you call it, "i" is incremented and returned
const id = (() => {
  let i = 0
  return () => ++i
})()

// note: "package" is a reserved word in strict mode, so the parameter is named "pkg"
const addPackage = pkg =>
  idPackageMap.set(id(), pkg)

addPackage(2346343)
addPackage(5465654)
addPackage(3455675)

console.log('contents: ', [...idPackageMap.entries()])
console.log('package number for id 2: ', idPackageMap.get(2))
Now, when you insert those <div> elements you can set the arbitrary identifier on them, and when you need the actual package number it is just a matter of Map#get: idPackageMap.get(1) (replace 1 with the relevant identifier).
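A minimal sketch of wiring this up (the #packages container and the data-id attribute are illustrative names, not part of the original markup): render each package with its arbitrary id and resolve the real package number only in memory on click.

idPackageMap.forEach(function (pkg, id) {
    // only the arbitrary id ends up in the DOM; "pkg" stays in memory
    $('<div>', { 'data-id': id, 'class': 'retail-package' })
        .text('Package ' + id)
        .appendTo('#packages');
});

$('#packages').on('click', '.retail-package', function () {
    $(this).addClass('selected-package');
    var selectedPackage = idPackageMap.get($(this).data('id'));
    // selectedPackage now holds the real package number, never exposed in the markup
});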
I have a smallish grid (30x20) with numbers in its cells. I have to display all of them, calculate them in different ways (by columns, rows, some cells, etc.) and write values into some cells. This data is also written to and read from database table fields. Everything works except a (theoretically) simple load mask.
When, for example, writing data to the table field, I try to show the mask and hide it on finish. I have used such masks very often, but only in this situation do I have a problem I can't solve.
I prepare this mask the following way:
msk = new Ext.LoadMask(Ext.getBody(), { msg: "data loading ..." });
msk.show();
// [writing data loops]
msk.hide();
msk.destroy();
I also tried using the grid object in place of Ext.getBody(), but without result.
I also found that the program behaves in a peculiar way: the loops I use to write data to the table field seem to be "skipped over" by the mask, as if the loops were running in the background (asynchronously).
Would you be so kind as to suggest something?
No, no, no, sorry guys, but my description wasn't very precise. The problem isn't loading or writing data to the database. Let's say the stores are already in memory; my problem is calculating something and writing it into the grid, just to see those values on the screen. Let me use my example once again:
msk = new Ext.LoadMask(Ext.getBody(), { msg: "data loading ..." });
msk.show();
Ext.each(dataX.getRange(), function (X) {
    Ext.each(dataY.getRange(), function (Y) {
        …
        X.set('aaa', 10);
        …
    });
});
msk.hide();
msk.destroy();
And in such a situation the mask either isn't visible at all or disappears too quickly to be seen.
In the meantime I found what I think is a good description of my problem, but I still can't find a solution for my case. When I use e.g. the alert() function I do see the mask; when I use a delay instead, the mask is still too fast. The explanation is the following:
The reason for that is quite simple: JS is single-threaded. If you modify the DOM (for example by turning a mask on), the actual change is made only after the current execution path has finished. Because you turn the mask on at the beginning of some time-consuming task, the browser postpones the DOM change until the task completes; and because you turn the mask off at the end of the method, the mask might not show at all. The solution is simple: invoke the store rebuild after some delay.
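A minimal sketch of that advice (assuming ExtJS 4+, where Ext.defer is available): show the mask, then defer the heavy loops so the browser gets a chance to paint the mask first.

msk = new Ext.LoadMask(Ext.getBody(), { msg: "data loading ..." });
msk.show();
Ext.defer(function () {
    // the heavy synchronous loops now run after the browser has painted the mask
    Ext.each(dataX.getRange(), function (X) {
        Ext.each(dataY.getRange(), function (Y) {
            X.set('aaa', 10);
        });
    });
    msk.hide();
    msk.destroy();
}, 50);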
I have no idea how your code looks in general, but here is a tip you could actually use.
First of all, loading operations are asynchronous, so you need to show the mask and then somehow destroy it once the data is loaded.
First, check whether your store configuration has autoLoad: false.
If yes, then we can take the next step:
Since ExtJS is strongly built around the MVC design pattern, you should have a controller somewhere in your project.
I suppose you are loading your data on afterrender or on a button click event, so we can do this:
In a function, for example loadImportantData:
loadImportantData: function () {
    var controller = this;
    var store = controller.getStore('YourStore'); // or Ext.getStore('YourStore'), depending on your controller configuration
    var myMask = new Ext.LoadMask(Ext.getBody(), { msg: "Please wait..." });
    myMask.show();
    store.load({
        callback: function (records, operation, success) {
            // this callback fires when the store has loaded all its data;
            // hide the mask here.
            myMask.hide();
        }
    });
}
When the data has loaded, the mask will disappear.
If you have a reference to the grid, you can simply call grid.setLoading(true) to display a loading mask over the grid at any time.
I really didn't know how to explain my question in the title, so I tried.
Anyways, this is my problem: I have a webpage which is basically a puzzle. The basic premise is that when you visit a certain link, it triggers a function and shows the next piece.
Here's one of the functions that will show the piece -
function showone() {
    var elem = document.getElementById("one");
    if (elem.className === "hide") { // note: === (comparison), not = (assignment)
        elem.className = "show";
    }
}
The reason it's built like this is that the pieces are constructed and placed using an HTML table, with classes used to hide and show them.
What I need to do is somehow create a URL that will trigger a new piece. For example, I'd like "www.website.com/index.html?showone" to trigger the "showone" function.
I don't know how to do this though, and after a fair bit of searching, I'm more confused than I was to begin with.
The reason I'm using JavaScript to begin with is that the page can't refresh. I understand this might not be possible, in which case I'm open to any suggestions on how I could get this to work.
Thanks in advance, any suggestions would be greatly appreciated.
-Mitchyl
JavaScript web application frameworks can do this for you; they let you build web applications without page refreshes.
For example, you can use Backbone.js: it has a built-in Router class and it is very easy to use.
The code is as easy as:
var Workspace = Backbone.Router.extend({
    routes: {
        "help": "help",                  // #help
        "search/:query": "search",       // #search/kiwis
        "search/:query/p:page": "search" // #search/kiwis/p7
    },
    help: function() {
        ...
    },
    search: function(query, page) {
        ...
    }
});
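Note that the routes only start firing once you instantiate the router and start Backbone's history (a minimal sketch):

var workspace = new Workspace();
Backbone.history.start(); // begin monitoring hashchange events and dispatching routes
// visiting index.html#help now calls help() without a page refresh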
You can also use AngularJS; it is a big framework, backed by Google.
Maybe this solution can help you?
$("a.icon-loading-link").click(function(e){
var link = $(e.target).prop("href"); //save link of current <a> into variable
/* Creating new icon-tag, for example $("<img/>", {src: "link/to/file"}).appendTo("next/dt"); */
e.preventDefault(); //Cancel opening link
return false; //For WebKit browsers
});
I'm building a single-page / fat-client application and wondering what the best practice is for including and tracking with http://piwik.org/.
I'd like to use Piwik in a way that is architecturally sound and replaceable with a different library in the future.
It seems that there are two basic options for tracking with Piwik:
1. Fill up a global _paq array with commands, then load the script (though it's unclear to me how to record future "page" views or change variables this way)
2. Get and use var myTracker = Piwik.getTracker()
_paq approach:
myApp.loadAnalytics = function() { /* dynamically insert the piwik.js script */ }
myApp.track = function(pageName) {
    window._paq = window._paq || [];
    _paq.push(['setDocumentTitle', pageName]);
    _paq.push(["trackPageView"]);
}
myApp.loadAnalytics()
// Then, anywhere in the application, and as many times as I want (I hope :)
myApp.track('reports/eastWing') // Track a "page" change, lightbox event, or anything else
.getTracker() approach:
myApp.loadAnalytics = function() { /* dynamically insert the piwik.js script */ }
myApp.track = function(pageName) {
    myApp.tracker = myApp.tracker || Piwik.getTracker('https://mysite.com', 1);
    myApp.tracker.trackPageView(pageName);
}
myApp.loadAnalytics()
// Then, anywhere in the application, and as many times as I want (I hope :)
myApp.track('reports/eastWing') // Track a "page" change, lightbox event, or anything else
Are these approaches functionally identical? Is one preferred over another for a single page app?
To keep the tracking library (e.g. Piwik) completely independent from your application, write a small class that proxies the tracking calls to the Piwik tracker. Later, if you switch from Piwik to XYZ, you simply update this proxy class rather than updating multiple files that do tracking.
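A minimal sketch of such a proxy (the analytics object and method name are illustrative): application code only ever calls the wrapper, and the Piwik-specific calls live in one place.

var analytics = {
    trackPageView: function (pageName) {
        // Piwik-specific today; swap only this body if you change vendors
        window._paq = window._paq || [];
        _paq.push(['setDocumentTitle', pageName]);
        _paq.push(['trackPageView']);
    }
};

// anywhere in the application:
analytics.trackPageView('reports/eastWing');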
The async (_paq) approach is a must for your app (for example, a call to any 'track*' method will send a request).
The full solution using .getTracker looks like this:
https://gist.github.com/SimplGy/5349360
I'm still not sure whether it would be better to use the _paq array instead.
I have a web-based documentation searching/viewing system that I'm developing for a client. Part of this system is a search system that allows the client to search for a term[s] contained in the documentation. I've got the necessary search data files created, but there's a lot of data that needs to be loaded, and it takes anywhere from 8-20 seconds to load all the data. The data is broken into 40-100 files, depending on what documentation needs to be searched. Each file is anywhere from 40-350kb.
Also, this application must be able to run on the local file system, as well as through a webserver.
When the webpage loads, I can generate a list of which search data files need to be loaded. This entire list must be loaded before the webpage can be considered functional.
With that preface out of the way, let's look at how I'm doing it now.
After I know that the entire webpage has loaded, I call a loadData() function:
function loadData(){
    var d = new Date();
    var curr_min = d.getMinutes();
    var curr_sec = d.getSeconds();
    var curr_mil = d.getMilliseconds();
    console.log("test.js started background loading, time is: " + curr_min + ":" + curr_sec + ":" + curr_mil);
    recursiveCall();
}

function recursiveCall(){
    if(file_array.length > 0){
        var string = file_array.pop();
        setTimeout(function(){ $.getScript(string, recursiveCall); }, 1);
    }
    else{
        var d = new Date();
        var curr_min = d.getMinutes();
        var curr_sec = d.getSeconds();
        var curr_mil = d.getMilliseconds();
        console.log("test.js stopped background loading, time is: " + curr_min + ":" + curr_sec + ":" + curr_mil);
    }
}
What this does is process the array of files sequentially, taking a 1 ms break between files. This helps prevent the browser from locking up completely during the loading process, but the browser still tends to get bogged down loading the data. Each of the files I'm loading looks like this:
AddToBookData(0,[0,1,2,3,4,5,6,7,8]);
AddToBookData(1,[0,1,2,3,4,5,6,7,8]);
AddToBookData(2,[0,1,2,3,4,5,6,7,8]);
Each line is a function call that adds data to an array. The "AddToBookData" function simply does the following:
function AddToBookData(index1, value1){
    BookData[BookIndex].push([index1, value1]);
}
This is the existing system. By the time all the data is loaded, AddToBookData has been called 100,000+ times.
I figured that was pretty inefficient, so I wrote a script that takes the test.js file containing all the function calls above and converts it into one giant array equal to the data structure that BookData builds. Instead of making all the function calls the old system did, I simply do the following:
var test_array = [..........(data structure I need).......];
BookData[BookIndex] = test_array;
I was expecting to see a performance increase because I was removing all the function calls above, but in fact this method takes slightly more time to create the exact same data structure. I should note that test_array holds slightly over 90,000 elements in my real-world test.
It seems that both methods of loading data have roughly the same CPU utilization. I was surprised to find this, since I was expecting the second method to require little CPU time, given that the data structure is created beforehand.
Please advise.
It looks like there are two basic areas for optimising the data loading, which can be considered and tackled separately:
1. Downloading the data from the server. Rather than one large file, you should see gains from parallel loads of multiple smaller files; experiment with the number of simultaneous loads, bearing in mind browser connection limits and the diminishing returns of too many parallel connections (a minimal sketch follows this list). See my parallel vs. sequential experiments on jsFiddle, but bear in mind that the results will vary due to the vagaries of pulling the test data from GitHub; you're best off testing with your own data under more tightly controlled conditions.
2. Building your data structure as efficiently as possible. Your result looks like a multi-dimensional array; this interesting article on JavaScript array performance may give you some ideas for experimentation in this area.
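A minimal sketch of the parallel-download idea (reusing file_array and $.getScript from the question; the completion message is illustrative): fire all requests at once and count them down.

var remaining = file_array.length;
file_array.forEach(function (url) {
    $.getScript(url, function () {
        if (--remaining === 0) {
            console.log('all files loaded'); // everything is in; the page is functional
        }
    });
});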
But I'm not sure how far you'll really be able to go by optimising the data loading alone. To solve the actual problem with your application (the browser locking up for too long), have you considered options such as the following?
Using Web Workers
Web Workers might not be supported by all your target browsers, but they should prevent the main browser thread from locking up while it processes the data.
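A minimal sketch of the worker approach (file and variable names are illustrative, and it assumes each line of a data file looks like "0,[0,1,2,3,4,5,6,7,8]"):

// main.js: hand the raw file text to a worker so parsing leaves the UI thread
var worker = new Worker('parse-worker.js');
worker.onmessage = function (e) {
    BookData[BookIndex] = e.data; // the parsed structure arrives when the worker finishes
};
worker.postMessage(rawFileText);  // rawFileText: the file contents fetched earlier

// parse-worker.js
onmessage = function (e) {
    var result = e.data.split('\n').map(function (line) {
        return JSON.parse('[' + line + ']'); // "0,[0,1,...]" -> [0, [0, 1, ...]]
    });
    postMessage(result);
};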
For browsers without workers, you could consider increasing the setTimeout interval slightly to give the browser time to service the user as well as your JS. This will make things slightly slower overall, but may increase user happiness when combined with the next point.
Providing feedback of progress
For both worker-capable and worker-deficient browsers, take some time to update the DOM with a progress bar. You know how many files you have left to load, so progress should be fairly consistent; and although things may actually be slightly slower, users will feel better if they get feedback and don't think the browser has locked up on them.
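A sketch of progress feedback wired into the question's recursiveCall() loop (the totalFiles variable and the #progress element are illustrative):

var totalFiles = file_array.length; // capture the count before the loop starts popping
function updateProgress() {
    var done = totalFiles - file_array.length;
    document.getElementById('progress').textContent =
        'Loading file ' + done + ' of ' + totalFiles + '...';
}
// call updateProgress() at the top of recursiveCall(); the 1 ms setTimeout
// already gives the browser a chance to repaint between files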
Lazy Loading
As suggested by jira in his comment: if Google Instant can search the entire web as we type, is it really not possible to have the server return a file with all locations of the search keyword within the current book? That file should be much smaller and faster to load than the locations of all words within the book, which is what I assume you are currently trying to load as quickly as you can.
I tested three methods of loading the same 9,000,000-point dataset into Firefox 3.6.4:
1. Stephen's getJSON method
2. My function-based push method
3. My pre-processed array-appending method
I ran my tests two ways. In the first iteration of testing, I imported 100 files each containing 10,000 rows of data, with each row containing 9 data elements [0,1,2,3,4,5,6,7,8].
In the second iteration I tried combining files, so that I was importing 1 file with 9 million data points.
This was a lot larger than the dataset I'll be using, but it helps demonstrate the speed of the various import methods.
              Separate files   Combined file
JSON:         34 seconds       34 seconds
FUNC-BASED:   17.5 seconds     24 seconds
ARRAY-BASED:  23 seconds       46 seconds
Interesting results, to say the least. I closed the browser after loading each webpage and ran the tests four times each to minimize the effect of network traffic/variation (run across a network, using a file server). The numbers you see are averages, although the individual runs differed by only a second or two at most.
Instead of using $.getScript to load JavaScript files containing function calls, consider using $.getJSON. This may boost performance. The files would now look like this:
{
    "key" : 0,
    "values" : [0,1,2,3,4,5,6,7,8]
}
After receiving the JSON response, you could then call AddToBookData on it, like this:
function AddToBookData(json) {
    BookData[BookIndex].push([json.key, json.values]);
}
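On the loading side, a sketch of swapping $.getScript for $.getJSON inside the question's recursiveCall:

setTimeout(function () {
    $.getJSON(string, function (json) {
        AddToBookData(json); // add the parsed entry, then move on to the next file
        recursiveCall();
    });
}, 1);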
If your files have multiple sets of calls to AddToBookData, you could structure them like this:
[
    {
        "key" : 0,
        "values" : [0,1,2,3,4,5,6,7,8]
    },
    {
        "key" : 1,
        "values" : [0,1,2,3,4,5,6,7,8]
    },
    {
        "key" : 2,
        "values" : [0,1,2,3,4,5,6,7,8]
    }
]
And then change the AddToBookData function to compensate for the new structure:
function AddToBookData(json) {
    $.each(json, function(index, data) {
        BookData[BookIndex].push([data.key, data.values]);
    });
}
Addendum
I suspect that regardless of what method you use to transport the data from the files to the BookData array, the true bottleneck is the sheer number of requests. Must the data really be fragmented into 40-100 files? If you change to the JSON format, you could load a single file that looks like this:
{
    "file1" : [
        {
            "key" : 0,
            "values" : [0,1,2,3,4,5,6,7,8]
        },
        // all the rest...
    ],
    "file2" : [
        {
            "key" : 1,
            "values" : [0,1,2,3,4,5,6,7,8]
        },
        // yadda yadda
    ]
}
Then you could make one request, load all the data you need, and move on. Although the browser may initially lock up (though maybe not), it would probably be MUCH faster this way.
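A sketch of that single request (the file name is illustrative; this reuses the multi-entry AddToBookData above):

$.getJSON('all-book-data.json', function (data) {
    $.each(data, function (fileName, entries) {
        AddToBookData(entries); // entries is the array of {key, values} objects
    });
});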
Here is a nice JSON tutorial, if you're not familiar: http://www.webmonkey.com/2010/02/get_started_with_json/
Fetch all the data as a string, and use split(). This is the fastest way to build an array in JavaScript.
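A minimal sketch of that approach (the file name and record format are assumptions: one record per line, index first, comma-separated values):

$.get('book-data.txt', function (rawText) {
    var lines = rawText.split('\n');
    for (var i = 0; i < lines.length; i++) {
        var parts = lines[i].split(',');
        BookData[BookIndex].push([
            parseInt(parts[0], 10),       // the index
            parts.slice(1).map(Number)    // the data points
        ]);
    }
}, 'text');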
There's an excellent article on a very similar problem from the people who built the Flickr search: http://code.flickr.com/blog/2009/03/18/building-fast-client-side-searches/