Cost Function Regularization in TensorFlow.js - javascript

I'm wondering if someone could help me. I'm new to TensorFlow.js (the JavaScript version).
I've built a neural network and want to add a regularization term to the cost function (loss function).
I can see the regularizers in the JavaScript API documentation, but can't figure out how to use them. The layers can have some sort of regularizer associated with them, but the cost function is not defined in the layers, so I don't think this is what I'm looking for.
I had a look through the source code on GitHub. I found some open tickets that mentioned regularization, and I also found a regularization function that applies the L2 or L1 norm to a vector. I could try to write a function that augments the cost function using that regularization function, but I don't want to go to that much effort if a function already exists. The Python version of TensorFlow does contain what I'm looking for. Does anyone know if what I'm looking for already exists in the JavaScript version and, if so, how to implement it? Thanks.

Assuming that TensorFlow operates the same way in Python and JavaScript, it looks like you do add regularisation of the weights to the cost function via the layers. From a mathematical point of view, this is not exactly obvious, hence my question.
If you search the internet for regularisation of the loss function in TensorFlow.js, there is nothing. However, if you read the Python tutorials, they do provide an answer. I found this website particularly useful:
https://www.analyticsvidhya.com/blog/2018/04/fundamentals-deep-learning-regularization-techniques/
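Concretely, in TensorFlow.js the penalty is attached per layer through the kernelRegularizer (or biasRegularizer) option, and the framework adds it to the loss for you during training. A minimal sketch (the layer sizes and the 0.01 strength are just placeholders):

const tf = require('@tensorflow/tfjs');

// L2 weight penalty attached to each dense layer; its value is added to the
// loss that model.fit() minimises, so no manual change to the cost function is needed.
const model = tf.sequential();
model.add(tf.layers.dense({
  units: 16,
  activation: 'relu',
  inputShape: [4],
  kernelRegularizer: tf.regularizers.l2({l2: 0.01})
}));
model.add(tf.layers.dense({
  units: 1,
  kernelRegularizer: tf.regularizers.l2({l2: 0.01})
}));
model.compile({optimizer: 'adam', loss: 'meanSquaredError'});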

Related

Information about Blazor.platform functions

In order to speed up JavaScript interop calls from a Blazor WASM web app, I need to know a little bit more about what conversions are available. So far I have found one example here of the Blazor.platform.toUint8Array function, which appears to take an unmarshalled array (a pointer?) and convert it to a Uint8Array. Is there a place where one can find what functions are available?
Perhaps you may find something here...

JavaScript Iterative Variable Filtering

Is there a tool that allows us to search for a javascript variable just like a memory editor does: through iterative filtering either by exact value or change?
(Sorry for the long intro, but it's the best way I found to describe my use case.)
When I was about 14 yrs old, I would use a memory editor to find, monitor, and edit variables in games.
This allowed me to understand a bit better how computers work but also allowed me to have fun, changing the variables of the games to whatever I liked (offline, of course ;) )
The program would show me all variables.
I would then reduce the list of variables by repeatedly filtering: either by searching for its exact value (if it was known) or by change (increase, decrease).
Now I find myself wanting to do the same for JavaScript. I've searched for a while and tried different things, including searching for the variables in the window object in the console (please keep in mind I'm not a JavaScript developer) or using the debug function (which works great if you know where the variable is), but I haven't found a similar solution.
Is it true that no such tool exists?
There are so many use cases:
debugging: finding where that number with that weird value is;
fun: edit variables just for plain fun, in games, etc;
learning how to code: I learned how to program by "hacking" around, and I know I'm not the only one ;)
probably many others I can't think of.
Does anyone know anything like this?
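One way to approximate that memory-editor workflow in a browser console is to scan the window object's own properties and narrow the candidates by repeated filtering. A minimal sketch of that idea (it only sees top-level globals, not values nested inside objects or closures):

// Scan global variables for an exact value; pass the previous hits back in
// to narrow the list after the value has changed, like a memory editor does.
function scanGlobals(value, candidates) {
  const names = candidates || Object.getOwnPropertyNames(window);
  return names.filter(name => {
    try {
      return window[name] === value;
    } catch (e) {
      return false; // some window properties throw on access
    }
  });
}

// Usage in the console:
// let hits = scanGlobals(100);   // first pass: everything currently equal to 100
// // ...let the page change the value to 95, then:
// hits = scanGlobals(95, hits);  // second pass narrows the candidates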

implement infinite lists in v8

I'm a computer science student and as part of a school project I have been asked to either find an exploit in the V8 engine, make some really good optimisation, or add a new feature.
I chose to add a new feature and here it is:
function* numbers() {
  let i = 1;
  while (true) {
    yield i++;
  }
}

var gen = numbers();
var l = [...gen];
var n = l[42];
Putting it in words: I want to be able to use the spread syntax to create a list that can hold an infinite number of items and access them.
It's possible to do it in Haskell and I want to try and do the same with JavaScript.
If one of the developers at V8 could point me in the right direction it would be great.
I already have a working environment, can compile the engine, read the source code, and run the debugger on the d8 binary file with symbols.
V8 developer here.
First off: just to be clear, stackoverflow is not a machine that does your homework. (You're only asking for "the right direction", that's okay.)
Secondly: V8 implements JavaScript as spec'ed, so any arbitrary "new feature" is not going to land in our repository, please be aware of that.
Thirdly: Keith has several good points. In particular, the syntax you propose is already valid JavaScript and eagerly evaluates the generator. Was your idea to switch to lazy evaluation iff the generator produces an infinite stream of values? Take a step back and think about the implications of that idea for a minute.
Finally, if you come up with workable syntax/semantics, then it'll still be a chunk of work to do this in V8, because there's no precedent for something similar. You'd probably want to use an elements interceptor, and store the generator in a private property. I think it would be much easier to polyfill the whole thing in pure JavaScript using a Proxy.
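As an illustration of that Proxy route, here is a minimal sketch: values are pulled from the generator lazily and cached the first time an index is read, so nothing is ever evaluated eagerly.

// Lazy "infinite list" backed by a generator; indices are materialised on demand.
function lazyList(gen) {
  const cache = [];
  return new Proxy({}, {
    get(target, prop) {
      if (typeof prop === 'symbol') return undefined;
      const index = Number(prop);
      if (!Number.isInteger(index) || index < 0) return undefined;
      while (cache.length <= index) {   // advance the generator only as far as needed
        const { value, done } = gen.next();
        if (done) return undefined;     // finite generator: out-of-range reads give undefined
        cache.push(value);
      }
      return cache[index];
    }
  });
}

function* numbers() {
  let i = 1;
  while (true) {
    yield i++;
  }
}

const l = lazyList(numbers());
console.log(l[42]); // 43, without spreading the infinite generator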
(It might be a good idea to reconsider your choice of project, but that's up to you. It's also quite a funky project description to begin with... how hard do they think it is to "find an exploit or make some really good optimisation"? Do let us know if you find an exploit though!)

Classifier to Predict activities taking place

I have multiple datasets here that I took from Kaggle. There are multiple CSV files, and each CSV file is made specifically for one activity: sitting, standing, walking, running, etc. The data is taken from sensors like accelerometers and gyroscopes. The values in the datasets are per axis: x, y and z.
Sample Data
Here is a sample dataset of jogging. Now I need to build classifiers in my program so that it can detect by itself whether the data is from jogging, sitting, standing, etc. I want to merge all the datasets into a single CSV file, upload it to my webpage, and then have the JavaScript code detect whether a particular row is sitting, standing, jogging, etc. I don't want any code help; I just need a little explanation or a way to start coding it. How can I start building such a classifier? I know it is kind of a broad question, but I think I have tried to explain myself in the best way possible. Once my program has detected the specific activity for every row, it will count each activity separately and then show the results in a table on the webpage.
To answer your question properly, it would be very helpful to know your level of understanding and experience with machine learning.
If you are a beginner, I would suggest trying to run and understand a couple of tutorials, which can easily be found on the web.
If you need an idea of the "standard" approach for machine learning development, I will try to give you a general idea of the process.
You can summarize the process in these main steps:
Data pre-processing -> Data splitting -> Feature selection -> Model training -> Validation -> Deployment
Data pre-processing is meant to clean and format the data: removing NA values, deciding how to handle categorical variables, outlier analysis, and so on. This is a complex step that depends on the application. In your case I would start by checking that the data in the different datasets are homogeneous, i.e. the features have the same meaning across CSV files and corresponding features follow the same distribution. While the meaning of each feature should be explained in the description of your CSV, the check of the distributions can easily be done by plotting box-plots for each feature and CSV file. If distributions of the same feature across different CSV files don't overlap, you should investigate the issue further.
An important step in the design of a good model is the splitting of the data. You should split your data into a training and a validation set (training/validation/test for a more comprehensive approach). This allows you to train your model on the training set and test it on the validation set, giving an unbiased estimate of its performance. I suggest becoming familiar with concepts such as cross-validation, stratified cross-validation, nested cross-validation for hyper-parameter tuning, overfitting, and bias. The validation of the model will give you an idea of the performance you can expect on unseen data. If you are considering more than one model, you can use the validation results to choose the "best" one; I suggest a comparison using confidence intervals or, if possible, a significance test (e.g. t-test, ANOVA). Before deployment, the model is trained on all the available data.
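As a sketch of the splitting step, in JavaScript since that is your target environment (the activity field name is just an assumption about your merged CSV), a stratified train/validation split could look like this:

// Stratified train/validation split: each activity label keeps roughly the
// same proportion in both subsets.
function shuffle(array) { // Fisher-Yates shuffle, in place
  for (let i = array.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [array[i], array[j]] = [array[j], array[i]];
  }
  return array;
}

function stratifiedSplit(rows, labelOf, validationFraction = 0.2) {
  const byLabel = new Map();
  for (const row of rows) {
    const label = labelOf(row);
    if (!byLabel.has(label)) byLabel.set(label, []);
    byLabel.get(label).push(row);
  }
  const train = [];
  const validation = [];
  for (const group of byLabel.values()) {
    const shuffled = shuffle([...group]);
    const nVal = Math.round(shuffled.length * validationFraction);
    validation.push(...shuffled.slice(0, nVal));
    train.push(...shuffled.slice(nVal));
  }
  return { train, validation };
}

// Hypothetical usage with rows parsed from the merged CSV:
// const { train, validation } = stratifiedSplit(rows, row => row.activity);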
The choice of the model depends on the data that you are using: number of samples, number of features, type of variables (numerical, categorical), and so on.
I'm not an expert in JavaScript, but I believe (just a feeling) that Python and R are more common choices for developing machine learning applications. Both have libraries specifically developed for the task, and you can find a lot of material and tutorials around.
With a bit more context I think I could be more specific.
I hope this helps.

How to store neural network knowledge data?

I am new to this area, so the question may seem strange. However, before asking I read a bunch of introductory articles about the key points of machine learning and the working parts of neural networks, including a very useful one: What is machine learning. Basically, as I understand it, an educated (trained) NN is (correct me if I'm wrong):
set of connections between neurons (maybe self-connected, may have gates, etc.)
formed activation probabilities on each connection.
Both things are adjusted during training to fit the expected output as closely as possible. Then, what we do with an educated NN is load the test subset of data into it and check how well it performs. But what happens if we're happy with the test results and we want to store the education results rather than run training again later when the dataset gets new values?
So my question is: is that education knowledge stored anywhere other than RAM? Can it be dumped (think of object serialisation, in a way) so that you don't need to educate your NN again on the data you get tomorrow or later?
Now I am trying to make a simple demo with my dataset using synaptic.js, but I could not spot that kind of concept of saving the education in the project's wiki.
That library is just an example; a reference to some Python lib would be good too!
With regards to storing it via synaptic.js:
This is quite easy to do! It actually has a built-in function for this. There are two ways to do this.
If you want to use the network without training it again
This will create a standalone function of your network; you can use it anywhere with JavaScript without requiring synaptic.js! Wiki
var standalone = myNetwork.standalone();
If you want to modify the network later on
Just convert your network to a JSON. This can be loaded up anytime again with synaptic.js! Wiki
// Export the network to a JSON which you can save as plain text
var exported = myNetwork.toJSON();
// Convert the JSON back to a usable network
var imported = Network.fromJSON(exported);
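The object returned by toJSON() is plain, serialisable data, so you can persist it wherever you like; for example (assuming Node.js, and the exported and Network variables from the snippet above):

// Save the exported network to disk and restore it later.
const fs = require('fs');
fs.writeFileSync('trained-network.json', JSON.stringify(exported));
// ...in a later session:
const saved = JSON.parse(fs.readFileSync('trained-network.json', 'utf8'));
const restored = Network.fromJSON(saved);

// In the browser, localStorage works the same way:
// localStorage.setItem('trained-network', JSON.stringify(exported));
// const restored = Network.fromJSON(JSON.parse(localStorage.getItem('trained-network')));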
I will assume in my answer that you are working with a simple multi-layer perceptron (MLP), although my answer is applicable to other networks too.
The purpose of 'training' an MLP is to find the correct synaptic weights that minimise the error on the network output.
When a neuron is connected to another neuron, its input is given a weight. The neuron performs a function, such as the weighted sum of all inputs, and then outputs the result.
Once you have trained your network, and found these weights, you can verify the results using a validation set.
If you are happy that your network is performing well, you simply record the weights that you applied to each connection. You can store these weights wherever you like (along with a description of the network structure) and then retrieve them later. There is no need to re-train the network every time you would like to use it.
Hope this helps.
