I have an Nx project with multiple packages in it. My goal is to produce graphs of the total CPU and memory used by all Nx tasks. Since Nx starts a separate process for every package, this seems non-trivial.
The system I want to record on is x86 Linux.
I looked at the perf and top commands, but I am not sure they can handle the subprocesses.
After some research, I found this: https://nx.dev/recipes/other/performance-profiling
It produces JSON in a format readable by Chrome DevTools, like so:
{
  "name": "library-name",
  "cat": "library-name",
  "ph": "X",
  "ts": 1000.1355486486,
  "dur": 2000.17981765465,
  "pid": 2684,
  "tid": 1,
  "args": {
    "target": {
      "project": "library-name",
      "target": "build"
    },
    "status": "local-cache"
  }
}
*the numbers are not the real ones
If this is combined with a PID-based CPU and memory log, for example from the top tool, it is possible to create a CPU and memory profile per Nx job.
From there, the plotting is easy.
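For example, here is a rough sketch of such a PID-based logger that polls ps for the Nx process and all of its descendants. The sampling interval, log file name, and root PID argument are arbitrary choices of mine, not part of the Nx recipe:
// sample-resources.js - rough sketch, run as: node sample-resources.js <nx-root-pid>
const { execSync } = require("child_process");
const fs = require("fs");
const rootPid = Number(process.argv[2]);
const INTERVAL_MS = 500; // arbitrary sampling period
function sample() {
  // One row per process: PID, parent PID, %CPU, resident memory in KB.
  // Note: ps reports lifetime-average %CPU; parse `top -b -n 1` instead
  // if instantaneous readings are needed.
  const rows = execSync("ps -e -o pid=,ppid=,pcpu=,rss=", { encoding: "utf8" })
    .trim()
    .split("\n")
    .map(line => {
      const [pid, ppid, pcpu, rss] = line.trim().split(/\s+/);
      return { pid: +pid, ppid: +ppid, pcpu: +pcpu, rss: +rss };
    });
  // Collect the root Nx PID plus all of its descendants (the task processes).
  const byParent = new Map();
  for (const r of rows) {
    if (!byParent.has(r.ppid)) byParent.set(r.ppid, []);
    byParent.get(r.ppid).push(r);
  }
  const stack = [rootPid];
  const samples = [];
  while (stack.length) {
    const pid = stack.pop();
    const row = rows.find(r => r.pid === pid);
    if (row) samples.push(row);
    for (const child of byParent.get(pid) || []) stack.push(child.pid);
  }
  // Append one JSON line per sampled process.
  const ts = Date.now();
  for (const s of samples) {
    fs.appendFileSync("resources.log", JSON.stringify({ ts, ...s }) + "\n");
  }
}
setInterval(sample, INTERVAL_MS);
Each log line carries a timestamp and a PID, so it can later be joined against the ts/dur/pid fields of the Nx profile, assuming the profile's pid field is the OS process id.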
I've upgraded to a Pro plan for hosting my Express app on Vercel. I expected that the maxDuration would have been increased from the default 10 seconds to 60 seconds (for the Pro plan), but it seems I have to explicitly specify the maxDuration value in my vercel.json file. Here's my vercel.json:
{
  "version": 2,
  "builds": [
    {
      "src": "./server.js",
      "use": "@vercel/node"
    }
  ],
  "routes": [
    {
      "src": "/(.*)",
      "dest": "/server.js"
    }
  ],
  "functions": {
    "controllers/*.js": {
      "maxDuration": 60
    },
    "middleware/**/*.js": {
      "maxDuration": 60
    }
  }
}
When I commit and push to GitHub, Vercel reports an error notification that says "Conflicting functions and builds configuration", which clearly states that I can't use "builds" and "functions" together in my vercel.json; I can only use one of them.
I tried deleting the "builds" section. Upon pushing again, Vercel couldn't interpret/convert my Express.js controller functions/files into serverless functions, even though I know the build step is definitely necessary.
I really need help: how can I configure this so that I can set a maxDuration and my API doesn't time out after the default 10 seconds?
Thanks.
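For reference, one possible "functions"-only shape (a sketch, assuming the Express app can be exported from api/index.js, a layout not shown in the question) would be:
{
  "rewrites": [
    {
      "source": "/(.*)",
      "destination": "/api"
    }
  ],
  "functions": {
    "api/index.js": {
      "maxDuration": 60
    }
  }
}
This drops the "builds"/"routes" pair entirely and lets Vercel build the function itself; whether it fits the existing controllers/middleware layout would still need to be verified.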
electron-builder version: 20.9.2
Target: windows/portable
I'm building a portable app with electron-builder and using socket.io to keep a real-time connection with a backend service, but I have an issue with the firewall. Because this is a portable app, every time it is opened it appears to be extracted into the temporary folder, which generates a new folder (so the path to the app is different) on every run. This makes the firewall think that a different app is asking for connection permissions. How can I change the extraction path when I run the app?
(This is the screen that I get every time I run the app)
This is my socket.io configuration
const io = require("socket.io")(6524);
io.on("connect", socket => {
  socket.on("notification", data => {
    EventBus.$emit("notifications", JSON.parse(data));
  });
});
My build settings in package.json
"build": {
"productName": "xxx",
"appId": "xxx.xxx.xxx",
"directories": {
"output": "build"
},
"files": [
"dist/electron/**/*",
"!**/node_modules/*/{CHANGELOG.md,README.md,README,readme.md,readme,test,__tests__,tests,powered-test,example,examples,*.d.ts}",
"!**/node_modules/.bin",
"!**/*.{o,hprof,orig,pyc,pyo,rbc}",
"!**/._*",
"!**/{.DS_Store,.git,.hg,.svn,CVS,RCS,SCCS,__pycache__,thumbs.db,.gitignore,.gitattributes,.editorconfig,.flowconfig,.yarn-metadata.json,.idea,appveyor.yml,.travis.yml,circle.yml,npm-debug.log,.nyc_output,yarn.lock,.yarn-integrity}",
"!**/node_modules/search-index/si${/*}"
],
"win": {
"icon": "build/icons/myicon.ico",
"target": "portable"
}
},
Any idea how I could at least specify an extraction path, or make it extract into the execution folder?
BTW, I already created an issue about this in the electron-builder repo.
In version 20.40.1 they added a new configuration key, unpackDirName:
/**
* The unpack directory name in [TEMP](https://www.askvg.com/where-does-windows-store-temporary-files-and-how-to-change-temp-folder-location/) directory.
*
* Defaults to [uuid](https://github.com/segmentio/ksuid) of build (changed on each build of portable executable).
*/
readonly unpackDirName?: string
Example
config: {
  portable: {
    unpackDirName: "0ujssxh0cECutqzMgbtXSGnjorm"
  }
}
More info #3799.
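In the package.json layout from the question, that key would sit in a portable block next to win. This is a sketch: the directory name is just a placeholder, and it requires upgrading electron-builder to 20.40.1 or later:
"build": {
  "win": {
    "icon": "build/icons/myicon.ico",
    "target": "portable"
  },
  "portable": {
    "unpackDirName": "my-app"
  }
}
With a fixed unpackDirName, the portable executable should unpack into the same TEMP subfolder on every run, so the firewall sees a stable path.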
I need to find a Node.js module, some script, or build something myself to search through data and find the most relevant results. I was originally going to use the Google Custom Search API to search the Steam Community Market, but I think that's unnecessary and limited. Before that, I was ripping the string apart, putting it back together, and getting price data from Steam individually; it worked practically perfectly, but it was messy and limited.
I now use an API to get all the Steam CS:GO market data, and I need to search it for the most relevant result.
A query against the data below might look like 'stained bs', 'karambit stained fn', or 'st stained ft':
"★ Karambit | Stained (Battle-Scarred)": {
"last_updated": 1439785289,
"quantity": 5,
"value": 18855
},
"★ Karambit | Stained (Factory New)": {
"last_updated": 1439785289,
"quantity": 5,
"value": 26499 // yea thats $265 for a purely cosmetic digital item
},
"★ Karambit | Stained (Field-Tested)": {
"last_updated": 1439785289,
"quantity": 10,
"value": 20000
},
"★ Karambit | Stained (Minimal Wear)": {
"last_updated": 1439785289,
"quantity": 10,
"value": 20223
},
"★ Karambit | Stained (Well-Worn)": {
"last_updated": 1439785289,
"quantity": 8,
"value": 19302
},
I am having trouble deciding what I should do.
The Node.js community has the Node Package Manager (npm), which is both a command-line utility and a public repository.
Using the search feature on npmjs.com, you can find very useful modules.
Also, github.com is a public site with tons of Node.js (or just JavaScript) modules. You can use its "explore" feature and filter by language to find useful client- and server-side code.
If you already have npm installed, you can use:
npm search <ANYTERM>
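If no module fits, a minimal dependency-free sketch of the kind of relevance scoring the question describes could look like this. The abbreviation map and the items argument are my assumptions, with items standing in for the market-data object shown above:
// Rough sketch of token-based relevance scoring over the market data.
const ABBREVIATIONS = {
  bs: "battle-scarred",
  ww: "well-worn",
  ft: "field-tested",
  mw: "minimal wear",
  fn: "factory new",
  st: "stattrak"
};
function search(items, query) {
  // Expand shorthand tokens ("fn" -> "factory new") before matching.
  const tokens = query
    .toLowerCase()
    .split(/\s+/)
    .map(t => ABBREVIATIONS[t] || t);
  const scored = Object.keys(items).map(name => {
    const haystack = name.toLowerCase();
    // Score = number of query tokens found in the item name.
    const score = tokens.filter(t => haystack.includes(t)).length;
    return { name, score, data: items[name] };
  });
  // Highest score first; drop items that match nothing.
  return scored.filter(r => r.score > 0).sort((a, b) => b.score - a.score);
}
// Example: search(items, "karambit stained fn")[0] should be the
// Factory New entry, assuming `items` holds the JSON from the question.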
Does anyone know if it is possible to change the number of cluster processes for an application at runtime in Node.js PM2?
Regards,
Philipp
You can use pm2 scale to change the number of processes at runtime; note that it only works in cluster mode.
Example:
pm2 scale APPNAME 2 will scale the process to exactly 2 instances.
pm2 scale APPNAME +2 will add two processes.
pm2 scale APPNAME -1 will remove one process.
Specify the PM2 settings in JSON format:
{
  "apps": [{
    "name": "server",
    "script": "index.js",
    "instances": 2,
    "exec_mode": "cluster",
    "cwd": "/path/to/script"
  }]
}
Start the server:
pm2 start application.json
Suppose you want to add 2 more instances; just run the same command again:
pm2 start application.json
Check the process list:
pm2 list
To test that all 4 instances are running in cluster mode:
pm2 restart server
It will restart each of the 4 processes.
At runtime (after the application is started), there are 2 ways to "scale" the application:
1) With the command line (documented here under "Scaling your cluster in realtime"), like this:
pm2 scale <app name> <n>
Note that <n> can be an absolute number which the cluster will scale up or down to. It can also be a relative addition such as pm2 scale app +3, in which case 3 more workers will be added to the cluster.
2) With the Programmatic API (docs are here, but scale is not documented). As it's not documented, here's how you do it:
pm2.scale(<APPNAME>, <SCALE_TO>, errback)
Note that <SCALE_TO> is the number that will be scaled up or down to, not the number added or removed. Here's a full example of connecting to pm2 and scaling to 4 instances:
var pm2 = require('pm2');
pm2.connect(function (err) {
  // Scale the app named 'appname' to exactly 4 instances.
  pm2.scale('appname', 4, function (err, procs) {
    console.log('SCALE err: ', err);
    console.log('SCALE procs: ', procs);
  });
});
Recently I started using Kartograph. I am inexperienced with SVG, so the map creation is creating headaches for me. After initial trouble creating a world map that outlines country borders - similar to this - and a few other things (city regions and some decorative elements), my problem boils down to an undocumented - or at least I haven't found it in the docs - error. I guess it is related to my ignorance of the kartograph.py framework.
The JSON file I provide to Kartograph looks like this:
{
  "proj": {
    "id": "lonlat",
    "lon0": 20,
    "lat0": 0
  },
  "layers": {
    "background": {
      "special": "sea",
      "charset": "latin-1",
      "simplify": false
    },
    "graticule": {
      "special": "graticule",
      "charset": "latin-1",
      "simplify": false,
      "latitudes": 1,
      "longitudes": 1,
      "styles": {
        "stroke-width": "0.3px"
      }
    },
    "world": {
      "src": "ne_50m_admin_0_countries.shp",
      "charset": "latin-1",
      "simplify": false
    },
    "lakes": {
      "src": "Lakes.shp",
      "charset": "latin-1",
      "simplify": false
    },
    "trees": {
      "src": "Trees.shp",
      "charset": "latin-1",
      "simplify": false
    },
    "depth": {
      "src": "DepthContours.shp",
      "charset": "latin-1",
      "simplify": false
    },
    "cities": {
      "src": "CityAreas.shp",
      "charset": "latin-1",
      "simplify": false
    }
  }
}
I know the output file will be huge and the generation will take ages, but it is just a test. I will experiment with the "simplify" option later. Much of the code in the file is based on this tutorial. Also, the empty simplify clause might not be necessary, but kartograph complained about the lack of the option, so I added it.
The command I use is this one:
kartograph world.json -o world.svg
It runs for some time (parsing all the input files, I guess) before aborting. Now, the error I am facing is this one:
cli.py, in render_map()
  71: K.generate(cfg, args.output, preview=args.preview, format=format, stylesheet=css)
kartograph.py, in generate()
  46: _map = Map(opts, self.layerCache, format=format)
map.py, in __init__()
  50: me.bounds_poly = me._init_bounds()
map.py, in _init_bounds()
  192: features = self._get_bounding_geometry()
map.py, in _get_bounding_geometry()
  257: charset=layer.options['charset']
get_features() got an unexpected keyword argument 'filter'
I tried looking at the file which throws the error (map.py), but I quickly realized that there is too much interaction between the files for me to grasp.
I hope the data I provided is sufficient for someone more familiar with kartograph than me to track the error down.
UPDATE: The error is still valid. I have now tested it on both a MacBook Pro and an Asus netbook (Arch and Bodhi Linux, respectively).
Thanks in advance,
Carson
As far as I know, you can solve that problem by including a 'bounds' parameter. It is indeed very tricky, because according to the documentation (if it is valid to call it 'documentation') this error should not appear, since the only required parameter is 'layers'. Also, how the bounds are defined apparently depends on the chosen projection. For your example I would use simple polygon bounds.
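For reference, a bounds block would sit next to "proj" and "layers" in the config. This is a bbox-style sketch; the exact keys, and the polygon variant, should be double-checked against the Kartograph docs:
"bounds": {
  "mode": "bbox",
  "data": [-180, -60, 180, 85]
}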
I also had problems with that error. But after many attempts to set everything up, I noticed that it apparently only appears in the command-line version of Kartograph, and not when using Kartograph as a Python module in a script. That is, try including the JSON dictionary in a Python script where you import kartograph, like in the example below.
I also include an example of filtering, for the record, because it was another thing that failed to work in the command-line version of Kartograph.
# file: makeMap.py
from kartograph import Kartograph

K = Kartograph()

def myfilter(record):
    return record['iso_a3'] in ["FRA", "ITA", "DEU"]

config = {
    "layers": {
        "mylayer": {
            "src": "ne_50m_admin_0_countries.shp",
            "filter": myfilter,
            "attributes": {"iso_a3": "iso_a3", "name": "name", "id": "iso_a3"}
        }
    },
}

K.generate(config, outfile='world.svg')
Then, run the script as a Python script:
python makeMap.py