In order to speed up JavaScript interop calls from a Blazor WASM web app, I need to know a little more about what conversions are available. So far I have found one example here of the Blazor.platform.toUint8Array function, which appears to take an unmarshalled array (pointer?) and convert it to a Uint8Array. Is there a place where one can find what functions are available?
Perhaps you may find something here...
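For what it's worth, here is a minimal sketch of how the toUint8Array function from the question is typically used on the JavaScript side of an unmarshalled interop call; the function name and surrounding wiring are illustrative, not a documented API:

window.sumBytes = function (rawData) {
    // convert the unmarshalled .NET byte[] reference into a Uint8Array view
    var bytes = Blazor.platform.toUint8Array(rawData);
    var sum = 0;
    for (var i = 0; i < bytes.length; i++) {
        sum += bytes[i];
    }
    return sum;
};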
I'm wondering if someone could help me. I'm new to TensorFlow.js (the JavaScript version).
I've built a neural network and want to add a regularization term to the cost function (loss function).
I can see the regularizers in the JavaScript API documentation, but can't figure out how to use them. The layers can have some sort of regularizer associated with them, but the cost function is not defined in the layers, so I don't think this is what I'm looking for.
I had a look through the source code on GitHub. I found some open tickets that mentioned regularization, and also a regularization function that applies the L2 or L1 norm to a vector. I could try to write a function that augments the cost function using that regularization function, but I don't want to go to that much effort when a function may already exist. The Python version of TensorFlow does contain what I'm looking for. Does anyone know if it already exists in the JavaScript version and, if so, how do I implement it? Thanks.
Assuming that TensorFlow works the same way in Python and JavaScript, it looks like you do add regularisation of the weights to the cost function via the layers. From a mathematical point of view, this is not exactly obvious, hence my question.
If you search the internet for regularisation of the loss function in TensorFlow.js, there is nothing. However, the Python tutorials do provide an answer. I found this website particularly useful:
https://www.analyticsvidhya.com/blog/2018/04/fundamentals-deep-learning-regularization-techniques/
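To illustrate, here is a minimal sketch of the layer-based approach in TensorFlow.js (the layer sizes and penalty weight are illustrative):

const tf = require('@tensorflow/tfjs');
const model = tf.sequential();
model.add(tf.layers.dense({
    units: 32,
    inputShape: [10],
    activation: 'relu',
    // L2 penalty on this layer's weights, added to the loss during training
    kernelRegularizer: tf.regularizers.l2({l2: 0.01})
}));
model.add(tf.layers.dense({units: 1}));
// the per-layer penalties are folded into the loss reported during fitting
model.compile({optimizer: 'adam', loss: 'meanSquaredError'});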
The Node.js documentation strongly discourages the use of crypto.randomBytes(). However, as I read in a StackOverflow answer, of all the methods of random string generation (such as using timestamps, etc.), the best way to achieve the highest entropy is crypto.randomBytes().
I would like to use this uuid strategy to generate validation keys in my Node.js system. Is there a better way, performance-wise?
If you want to use a CSPRNG, not really.
Using uuid was suggested, but it simply calls crypto.randomBytes(16) and converts it to a hex string. randomBytes blocking isn't really a problem, because it offers an asynchronous API as well (the second argument is a callback). When generating such small amounts of data, the sync API might be faster, though.
The docs do still mention that a lack of entropy can cause a longer block than usual. That should only be a problem right after boot, though, and even then blocking can be avoided by using the asynchronous API.
The crypto.randomBytes() method will not complete until there is sufficient entropy available. This should normally never take longer than a few milliseconds. The only time when generating the random bytes may conceivably block for a longer period of time is right after boot, when the whole system is still low on entropy.
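For example, a minimal sketch of both APIs generating a 16-byte key (the key length and encoding are illustrative):

const crypto = require('crypto');
// asynchronous API: entropy gathering happens off the event loop, so nothing blocks
crypto.randomBytes(16, function (err, buf) {
    if (err) throw err;
    console.log('async:', buf.toString('hex'));
});
// synchronous API: fine for small amounts of data such as a validation key
const key = crypto.randomBytes(16).toString('hex');
console.log('sync:', key);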
Hi, basically what I want to do is pass a JavaScript array to a C module function, have the function modify the array in place, and then have JavaScript read the modified array.
My current approach uses carrays.i and array_functions, creating an Array and converting it to and from a doubleArray. Due to the copying, this gives me results worse than native JS. My array has about 41000 items.
C module: ~10ms (actual C function running time ~0.1ms)
JS module: ~3ms
For me, it's not possible to use a doubleArray from the very beginning (as this is part of a larger process). So the question is: how can I improve this? Is it possible to use a TypedArray/ArrayBuffer? If yes, then how?
Following is my pseudocode:
let cArray = MyCModule.new_doubleArray(array.length),
    outArray = new Array(array.length);
arrayCopyJS2C(cArray, array); // written in JS; takes a lot of time
MyCModule.MyCFunction(cArray, array.length);
arrayCopyC2JS(cArray, outArray); // also written in JS; takes a lot of time
Yes, using an ArrayBuffer (with externalized backing store) is an efficient way to share a (number) array between JavaScript and C, because it doesn't require you to copy things around. That's assuming that you can use a TypedArray "from the beginning" on the JavaScript side; if the same limitation applies as to using a doubleArray from the beginning and you'd still have to copy, then the benefit will be smaller or nonexistent (depending on how fast you've made accesses to your doubleArray).
That said, V8 generates highly efficient code for operations on number arrays. I'm finding it hard to believe that the same function takes either 3ms in JS or 0.1ms in C. Can you share your JS implementation? If a C implementation is 30x as fast, then I bet the JS implementation could be improved a lot to get pretty close to that. Array operations are usually dominated by the time it takes to actually get the elements from memory, and no language has a particular advantage at that.
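For reference, a rough sketch of the TypedArray idea; MyCModule.myCFunctionInPlace is a hypothetical binding that operates directly on the typed array's backing store instead of on a copied doubleArray:

// allocate once; the Float64Array's backing store is what the C side reads and writes
const data = new Float64Array(41000);
// ... fill data as part of the larger process ...
// hypothetical binding: modifies the buffer in place, no copies in either direction
MyCModule.myCFunctionInPlace(data);
// JavaScript sees the modified values immediately
console.log(data[0]);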
We would like to exchange PO files with translators, and convert these to i18next's native JSON format. This sounds pretty straightforward using the i18next-conv utility.
However, i18next expects more or less special keys; for example the dot has special meaning with regard to i18next namespaces. In contrast, gettext PO files are intended to carry source strings (in the original language) for their message IDs.
We know that message IDs can be arbitrary, and can thus be mapped to i18next keys directly, but we would like to use source strings and use PO files as they were intended for various reasons.
The main reason is that all the translation tools we would like to use, and probably those of all our translators, expect this. Using symbolic keys would make translating a real pain. In any case, we figured from the debates around this that this is mainly a matter of opinion; we kind of made ours, and we would like to put this restriction as a requirement for this question.
Is it really a bad idea to use source strings as i18next keys from a technical standpoint? How hard is it to escape them? Is there anything else than the dot and namespaces that we should care about?
If we determine that we want to keep using symbolic keys, is there an alternative to i18next-conv that can generate i18next JSON translation files from PO files using source strings as message IDs? We understand that we would most likely need to maintain a separate mapping between the symbolic names and the original language strings, and we're prepared to do so.
Moreover, we wonder about the general workflow. How is the original PO file generated? How are the translation files maintained?
If we use source strings as keys in i18next, what are the best tools to extract strings from the codebase? xgettext doesn't seem to support JavaScript.
If we use symbolic keys in i18next, how can we best generate the original PO file? Is writing a POT file by hand a good practice?
Again, if we use symbolic keys, how can we easily invalidate translations whenever we update the original language strings? Are there tools for that?
We understand these questions are very basic, but we were a bit surprised at how little information we could find about i18next-gettext integration. The i18next-conv tool exists and works perfectly as advertised, but is it actually useful? Do people actually use it? If so, are our questions relevant?
Finally, are our expectations about the maturity of the system a little too high?
If you would like to use source strings as keys, just change the separators:
nsseparator = ':::'
keyseparator = '::'
so that . and : can be used inside a key without fear.
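For example, a minimal init sketch, assuming the i18next 1.x-era API this answer was written against (the sample key is illustrative):

i18n.init({
    nsseparator: ':::', // ':' inside a key no longer splits off a namespace
    keyseparator: '::'  // '.' inside a key no longer splits nested keys
}, function (t) {
    // the full source string, dot and colon included, is looked up as one key
    t('Save your time. Work smarter: not harder.');
});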
You could try using https://github.com/cheton/i18next-text. It allows you to use i18next translation without having keys as strings, so you do not need to worry about i18n key naming. Furthermore, you can also register the i18n helper with Handlebars.
Following is a simple example:
var i18n = require('i18next');
// extends i18n object to provide a new _() method
i18n._ = require('i18next-text')._;
i18n._('Save your time and work more efficiently.');
Check out the demo on JSFiddle.
TernJS has several JSON def files which contain the definitions of libraries. Can someone explain how I can best generate my own for my JavaScript libraries, or for standalone definition objects?
I cannot find any common procedure for this.
There's a tool for this included in Tern. See condense at http://ternjs.net/doc/manual.html#utils. It runs Tern on your file and tries to output the types that it finds. It's far from flawless, but for simple programs it works well. For files with a complicated structure or interface, you'll often have to hand-write the definitions.
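For simple libraries you can also write a definition file by hand; here is a minimal sketch in the JSON def format described in the manual (the names and type strings are illustrative):

{
    "!name": "mylib",
    "mylib": {
        "add": "fn(a: number, b: number) -> number",
        "version": "string"
    }
}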
There are three ways I have thought of to solve your problem:
Using Abstract Syntax Tree Parser and Visitor
One way to solve your problem would be to use an abstract syntax tree parser and a visitor in order to automate the task of scanning through the code and documenting it.
The resources here will be of help:
- http://ramkulkarni.com/blog/understanding-ast-created-by-mozilla-rhino-parser/
- What is JavaScript AST, how to play with it?
You usually use a parser to retrieve a tree, and then use a visitor to visit all the nodes and do your work there.
You will essentially have a tree representing the specific library and then you must write the code to store this in the def format you link to.
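Here is a minimal sketch of the parse-then-visit pattern, using the acorn parser and its walker instead of Rhino (the library choice is ours; any parser producing an ESTree-style AST works the same way):

const acorn = require('acorn');
const walk = require('acorn-walk');
const source = 'function add(a, b) { return a + b; }';
const ast = acorn.parse(source, {ecmaVersion: 2020});
// visit every function declaration and record its name and arity,
// which is the kind of information a def file needs
walk.simple(ast, {
    FunctionDeclaration: function (node) {
        console.log(node.id.name + ': fn with ' + node.params.length + ' params');
    }
});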
Getting a Documentation Generator and Modifying It
Another idea is to download the source code for a documentation generator, e.g. https://github.com/yui/yuidoc/
By modifying the styling/output format, you can generate "documentation" in the appropriate JSON format.
Converting Existing Documentation (HTML doc) into JSON
You can make a parser that takes a standard documentation format (as Javadoc is for Java, JSDoc is for JavaScript), and write a converter that extracts the relevant information and stores it in a JSON definition.
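A rough sketch of such a converter, assuming cheerio for HTML parsing; the selectors and page structure are entirely hypothetical and depend on the documentation you start from:

const fs = require('fs');
const cheerio = require('cheerio');
const $ = cheerio.load(fs.readFileSync('docs/mylib.html', 'utf8'));
const defs = {'!name': 'mylib'};
// hypothetical structure: one .method block per documented function
$('.method').each(function () {
    const name = $(this).find('.method-name').text().trim();
    const signature = $(this).find('.method-signature').text().trim();
    defs[name] = signature; // map into the JSON def format
});
fs.writeFileSync('mylib.json', JSON.stringify(defs, null, 2));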