Passing object in Electron IPC yields old object version - javascript

I'm experiencing a behavior of Electron's IPC that seems weird. The setting is an app that, when clicking on a button, performs a download in the main process via IPC, like so (in renderer.js):
$('#Download').click(async function () {
    await app.download();
    updateUI();
});
The download function in main performs the download, then updates a main.js closure-scoped object named items, like so:
app.download = async function() {
    const response = await fetch('https://...');
    const result = await response.json();
    await result.items.forEach(async element => {
        items[element.id] = element.content;
    });
    console.log('Nr of items: ' + Object.keys(items).length);
}
After the download is complete, the UI should be updated. For this, there's a getter in main.js:
app.getItems = function() { return items; }
The renderer process uses it to update the UI:
function updateUI() {
    var items = app.getItems();
    console.log('Received items: ' + Object.keys(items).length);
}
However, given this flow, app.getItems always returns the items state from before the download, i.e. missing the new items and showing a lower result for Object.keys(items).length. Only after I refresh the Electron app window does it return the state including the downloaded items. I tried several methods to get the updated item list back from main to renderer:
Have the app.download function return the items: Same result (!) - the item count written to the console differs between main and renderer. I even output timestamps on both sides to ensure the renderer is called after main.
Trigger the UI update from main using webContents.send('UpdateUI') and calling updateUI in an ipc.on('UpdateUI') handler: same result.
Add items to app.global in main, and retrieve it via remote.getGlobal() in renderer: same result.
It was only the final try that succeeded:
Pass items as a parameter with webContents.send('UpdateUI', items)
Now before refactoring the code to use this pattern I'd really be curious if anyone can explain this :-)
The only idea I have is that passing variables between main and renderer is not "by reference" (given they're two isolated processes), and that Electron's IPC does some kind of caching when deciding when to actually copy data. I carefully added await in every reasonable place (and the timestamps indicated I succeeded), so I'd be surprised if it's caused by any async side effects.
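For reference, a minimal sketch of the one pattern that worked, pushing the items over the 'UpdateUI' channel (win, standing for the BrowserWindow instance, is an assumption; the handler body is illustrative). This is consistent with the by-value idea above: each send serializes a fresh copy of items for the receiving process.
// main.js - after the download finishes, push the fresh items to the renderer
win.webContents.send('UpdateUI', items);

// renderer.js - receive the copy and update the UI from it
const { ipcRenderer } = require('electron');
ipcRenderer.on('UpdateUI', (event, receivedItems) => {
    console.log('Received items: ' + Object.keys(receivedItems).length);
    // ... rebuild the UI from this snapshot ...
});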

Related

Is there any way to get the parent node's name/value of a child node in a TreeView of VSCode extension(typescript/javascript)?

I'm building a VSCode extension connecting to a database and handling various DDL and DML operations. I'm pointing at a columnNode and, upon right-clicking it, a drop column command appears. If I pass the tableName hard-coded, everything works fine.
But the problem here is that, since I'm pointing at a columnNode, I'm not able to find a way to fetch the parent node (i.e., the tableName) of a columnNode. Is there any way to do it?
Here is my code snippet which is fetching the column node's details.
this.registerCommand(ONDBCommands.DROP_COLUMN, async (node: INode) => {
    const data = await Promise.resolve(node.getTreeItem()).then((result) => {
        return result;
    });
    // other relevant commands //
    // I have to pass
    client.dropColumn(tableName, columnName);
    // so that the dropColumn() method internally creates a DDL statement like
    // [`ALTER TABLE ${tableName} (DROP ${columnName})`] and executes it via client.
});
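One common way to make this work (a sketch only; ColumnNode, tableName, and the constructor shape are hypothetical, since the INode implementation isn't shown) is to hand the parent table node to each column node when the tree data provider creates its children, so the command handler can walk back up to the table:
// When the table node builds its children, keep a back-reference to the parent.
class ColumnNode {
    constructor(columnName, parentTableNode) {
        this.columnName = columnName;
        this.parent = parentTableNode; // the owning table node
    }
}

// In the drop-column command handler, read the table name off the parent.
this.registerCommand(ONDBCommands.DROP_COLUMN, async (node) => {
    const tableName = node.parent.tableName; // assumes the table node exposes its name
    client.dropColumn(tableName, node.columnName);
});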

How to send setup information in electron to a renderer process when creating it?

I am making an electron app in which I am using a class to create instances of Renderer windows (I need multiple instances of the renderer). Each renderer window has to be connected to a specific data set.
The problem is, I can't see an obvious way in Electron to pass setup values to the renderer process when I set it up in the Main process. I am using the createWindow function and I don't see any way to connect any variable or object to it which can be recovered in the preload or the renderer Javascript file.
How do people typically handle that?
To illustrate further what I want to do.
Open dataset #1 in new window
Open dataset #2 in another window
Let people check and compare and then save data.
How do I pass the dataset info to the new window when I use the createWindow command to create it?
Thanks in advance.
Send each BrowserWindow their data like this:
// Main process.
win.webContents.send('data', {'a': 'b'... });
Receive the data like this:
// Renderer process.
var ipcRenderer = require('electron').ipcRenderer;
ipcRenderer.on('data', function(evt, data) {
    console.log(data); // {'a': 'b'}
});
See the documentation for details.
I found that if I created a renderer and had the main process send a message immediately, the message would sometimes be lost. So I have found it necessary to wait for the 'did-finish-load' event first.
//main
const createWindow = (windowData) => {
    let win = new BrowserWindow({
        show: false,
    });
    win.loadFile(path.join(__dirname, 'index.html'))
    win.webContents.once('did-finish-load', () => {
        win.webContents.send('doThing', windowData)
    })
}
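On the renderer side, the data can then be picked up with the same ipcRenderer.on pattern shown above, using the 'doThing' channel from the snippet (the handler body is illustrative):
// Renderer (or preload) process.
const { ipcRenderer } = require('electron');
ipcRenderer.on('doThing', (event, windowData) => {
    // Initialize this window with its dataset.
    console.log('Window received its dataset:', windowData);
});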

Electron: share DOM data (e.g. send img or canvas) using IPC (or anything else)

I'm trying to rescale a bunch of PIXI.js textures in a separate process, so render UI thread stays responsive. PIXI.js is unrelated to the question, but I want to provide a bit more context of what I'm doing.
Unfortunately, I can't make it work. At first I tried to spawn/fork a subprocess, but it turned out I can only share JSON-encoded data between the parent and a forked process. Bummer.
So, I decided to open a hidden browser window and repack them there.
Essentially, I do this in my render process:
// Start packing ... Say, urls.length > 1000, so it's quite a lot!
const textureArray = urls.map((url, i) => {
    // get the texture (from cache, sync operation) + scale it
    const texture = PIXI.Texture.fromImage(url, undefined, undefined, 0.3);
    return texture;
});
// log data in the "hidden" window (I made it visible to access the logs, but it doesn't matter)
console.log("Done!")
console.log(textureArray)
// send to main process
ipcRenderer.send('textures', textureArray)
Main process plays ping-pong:
ipcMain.on('textures', (event, data) => {
console.log("Got textures in the main")
// sending data to another window
mainWindow.webContents.send("textures", data)
})
And I just log the data upon receiving it in the second renderer window (the one with the main UI, which I'm trying not to block).
Now when I compare what I sent and what I receive, I notice subtle changes: HTML elements are missing in the data I receive.
I understand why this happens: the data I try to send contains a reference to the DOM element, not the DOM element itself, and it seems like the IPC protocol doesn't know how to serialize it. (Just in case it matters: the data I try to send is not present in the DOM tree, but created with new Image() internally.)
My question is, what is the way to exchange such data between electron windows, or, maybe, there's another way to solve my problem? I will appreciate any help!
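For what it's worth, one way to make image data IPC-friendly (a sketch only, not specific to PIXI; the helper name and the assumption that you hold plain Image objects are illustrative) is to extract the pixels into a serializable form, such as a data URL string, before sending:
// Renderer: turn an Image (not attached to the DOM) into a serializable string.
function imageToDataURL(img) {
    const canvas = document.createElement('canvas');
    canvas.width = img.width;
    canvas.height = img.height;
    canvas.getContext('2d').drawImage(img, 0, 0);
    // A data URL is a plain string, so IPC can copy it between processes.
    return canvas.toDataURL('image/png');
}

ipcRenderer.send('textures', images.map(imageToDataURL));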

JSON.stringify is very slow for large objects

I have a very big object in javascript (about 10MB).
When I stringify it, it takes a long time, so I send it to the backend and parse it into an object (actually nested objects with arrays); that takes a long time too, but it's not the problem in this question.
The problem:
How can I make JSON.stringify faster? Any ideas or alternatives? I need a JavaScript solution: libraries I can use, or ideas.
What I've tried
I googled a lot and it looks like there is nothing faster than JSON.stringify, or my googling skills have gotten rusty!
Result
I'll accept any suggestion that may solve the long save (sending to the backend) in the request (I know it's a big request).
Code Sample of problem (details about problem)
Request URL:http://localhost:8081/systemName/controllerA/update.html;jsessionid=FB3848B6C0F4AD9873EA12DBE61E6008
Request Method:POST
Status Code:200 OK
I am sending a POST to the backend and then in Java I call
request.getParameter("BigPostParameter")
and read it to convert it into an object using
public boolean fromJSON(String string) {
    if (string != null && !string.isEmpty()) {
        ObjectMapper json = new ObjectMapper();
        DateFormat dateFormat = new SimpleDateFormat(YYYY_MM_DD_T_HH_MM_SS_SSS_Z);
        dateFormat.setTimeZone(TimeZone.getDefault());
        json.setDateFormat(dateFormat);
        json.configure(DeserializationFeature.ACCEPT_SINGLE_VALUE_AS_ARRAY, true);
        WebObject object;
        // Logger.getLogger("JSON Tracker").log(Level.SEVERE, "Start");
        try {
            object = json.readValue(string, this.getClass());
        } catch (IOException ex) {
            Logger.getLogger(JSON_ERROR).log(Level.SEVERE, "JSON Error: {0}", ex.getMessage());
            return false;
        }
        // Logger.getLogger("JSON Tracker").log(Level.SEVERE, "END");
        return this.setThis(object);
    }
    return false;
}
Like this:
BigObject someObj = new BigObject();
someObj.fromJSON(request.getParameter("BigPostParameter"));
P.S.: FYI, this line, object = json.readValue(string, this.getClass());,
is also very, very slow.
Again, to summarize:
The problem is the posting time (stringify) - a JavaScript bottleneck.
Another problem is parsing that stringified data into an object (using Jackson). The stringified object mainly contains SVG tag content in a style column; the other columns are mostly strings and ints.
As the commenters said, there is no way to make parsing faster.
If the concern is that the app is blocked while it's stringifying/parsing, then try splitting the data into separate objects, stringifying them, and assembling them back into one object before saving on the server.
If the loading time of the app is not a problem, you could try an ad-hoc incremental-change approach on top of the existing app.
... App loading
Load map data
Make full copy of the data
... End loading
... App working without changes
... When saving changes
diff copy with changed data to get JSON diff
send changes (much smaller than full data)
... On server
apply JSON diff changes on the server to the full data stored on server
save changed data
I used json-diff https://github.com/andreyvit/json-diff to calculate changes, and there are a few analogous libraries.
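A minimal sketch of the diff step with json-diff (originalData, currentData, and the endpoint are illustrative names; the server-side apply step is whatever your backend provides):
// npm install json-diff
const { diff } = require('json-diff');

// originalData is the full copy made at load time; currentData holds the edits.
const changes = diff(originalData, currentData);

// Only the (much smaller) diff goes over the wire.
if (changes) {
    fetch('/api/save-diff', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(changes),
    });
}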
Parsing is a slow process. If what you want is to POST a 10MB object, turn it into a file, a blob, or a buffer. Send that file/blob/buffer using formdata instead of application/json and application/x-www-form-urlencoded.
Reference
An example using express/multer
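A sketch of that approach on the client (the field name is a placeholder; the URL is the one from the question): the object is still serialized once, but it travels as a multipart file part instead of a giant form-urlencoded parameter.
// Wrap the serialized object in a Blob and POST it as multipart/form-data.
const blob = new Blob([JSON.stringify(bigObject)], { type: 'application/json' });

const formData = new FormData();
formData.append('payload', blob, 'payload.json'); // field name is illustrative

fetch('/systemName/controllerA/update.html', {
    method: 'POST',
    body: formData, // the browser sets the multipart boundary header itself
});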
Solution
Well, as with most big "repeatable" problems, you can use async!
But wait, isn't JS still single-threaded even when it does async... yes... but you can use Web Workers to get true parallelism and serialize an object much faster by parallelizing the process.
General Approach
mainPage.js
//= Functions / Classes =============================================================|
// To tell JSON stringify that this is already processed, don't touch
class SerializedChunk {
    constructor(data) { this.data = data }
    toJSON() { return this.data }
}
// Attach all events and props we need on workers to handle this use case
const mapCommonBindings = w => {
    w.addEventListener('message', e => w._res(e.data), false)
    w.addEventListener('error', e => w._rej(e.data), false)
    w.solve = async obj => {
        w._state && await w._state.catch(_ => _) // Wait for any older tasks to complete if there is another queued
        w._state = new Promise((_res, _rej) => {
            // Give this object promise bindings that can be handled by the event bindings
            // (just make sure not to fire 2 errors or 2 messages at the same time)
            Object.assign(w, { _res, _rej })
        })
        w.postMessage(obj)
        return await w._state // Return the final output, when we get the `message` event
    }
}
//= Initialization ===================================================================|
// Let's make our 10 workers
const workers = Array(10).fill(0).map(_ => new Worker('worker.js'))
workers.forEach(mapCommonBindings)
// A helper function that schedules workers in a round-robin
workers.schedule = async task => {
    workers._c = ((workers._c || -1) + 1) % workers.length
    const worker = workers[workers._c]
    return await worker.solve(task)
}
// A helper used below that takes an object key, value pair and uses a worker to solve it
const _asyncHandleValuePair = async ([key, value]) => [key, new SerializedChunk(
    await workers.schedule(value)
)]
//= Final Function ===================================================================|
// The new function (You could improve the runtime by changing how this function schedules tasks)
// Note! This is async now, obviously
const jsonStringifyThreaded = async o => {
    const f_pairs = await Promise.all(Object.entries(o).map(_asyncHandleValuePair))
    // Take all final processed pairs, create a new object, JSON stringify top level
    const final = f_pairs.reduce((o, [key, chunk]) => (
        o[key] = chunk, // Add current key / chunk to object
        o // Return the object to next reduce
    ), {}) // Seed empty object that will contain all the data
    return JSON.stringify(final)
}
/* lot of other code, till the function that actually uses this code */
async function submitter() {
    // other stuff
    const payload = await jsonStringifyThreaded(input.value)
    await server.send(payload)
    console.log('Done!')
}
worker.js
self.addEventListener('message', function(e) {
    const obj = e.data
    self.postMessage(JSON.stringify(obj))
}, false)
Notes:
This works the following way:
Creates a list of 10 workers, and adds a few methods and props to them
We care about async .solve(Object): String which solves our tasks using promises while masking away callback hell
Use a new method: async jsonStringifyThreaded(Object): String which does the JSON.stringify asynchronously
We break the object into entries and solve each one in parallel (this can be optimized to be recursive to a certain depth, use best judgement :))
Processed chunks are cast into SerializedChunk which the JSON.stringify will use as is, and not try to process (since it has .toJSON())
Internally if the number of keys exceeds the workers, we round-robin back to the first worker and overschedule them (remember, they can handle queued tasks)
Optimizations
You may want to consider a few more things to improve performance:
Use of Transferable Objects, which will significantly decrease the overhead of passing data to the workers
Redesign jsonStringifyThreaded() to schedule more objects at deeper levels.
You can explore libraries like fast-json-stringify, which compile a template schema and use it while converting the JSON object, to boost performance. Check the article below.
https://developpaper.com/how-to-improve-the-performance-of-json-stringify/
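A minimal sketch of the fast-json-stringify approach (the schema below is illustrative; to pay off it has to describe your object's actual shape):
// npm install fast-json-stringify
const fastJson = require('fast-json-stringify');

// Compile a stringifier once from a schema describing the object layout.
const stringify = fastJson({
    type: 'object',
    properties: {
        style: { type: 'string' },   // e.g. the SVG content column
        name: { type: 'string' },
        count: { type: 'integer' },
    },
});

// Reuse the compiled function; for objects matching the schema it is
// typically much faster than JSON.stringify.
const payload = stringify({ style: '<svg>...</svg>', name: 'row1', count: 3 });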

initializeLayout and asynchronous loading

Currently I'm loading data asynchronously via data.js, as provided by the Grid app template. The problem is that groupedItems.js (the "Hub" page) calls _initializeLayout in its ready handler before Data is set in the global WinJS namespace, due to the asynchronous nature of the StorageFile class.
In data.js:
fileNames.forEach(function (val, index, arr) {
    var uri = new Windows.Foundation.Uri('ms-appx:///data/' + val + '.geojson');
    Windows.Storage.StorageFile.getFileFromApplicationUriAsync(uri).then(function (file) {
        Windows.Storage.FileIO.readTextAsync(file).then(function (contents) {
            // ... read, parse, and organize the data ...
            // Put the data into the global namespace
            WinJS.Namespace.define("Data", {
                items: groupedItems,
                groups: groupedItems.groups,
                getItemReference: getItemReference,
                getItemsFromGroup: getItemsFromGroup,
                resolveGroupReference: resolveGroupReference,
                resolveItemReference: resolveItemReference
            });
        });
    });
});
In groupedItems.js:
// ...
// This function updates the ListView with new layouts
_initializeLayout: function (listView, viewState) {
    /// <param name="listView" value="WinJS.UI.ListView.prototype" />
    if (viewState === appViewState.snapped) {
        listView.itemDataSource = Data.groups.dataSource;
        listView.groupDataSource = null;
        listView.layout = new ui.ListLayout();
    } else {
        listView.itemDataSource = Data.items.dataSource;
        listView.groupDataSource = Data.groups.dataSource;
        listView.layout = new ui.GridLayout({ groupHeaderPosition: "top" });
    }
},
// ....
Seeing as I cannot move this code out of this file into the done() function of the Promise in data.js, how do I make the application wait until Data is initialized in the WinJS namespace prior to initializing the layout?
You have two asynchronous operations in progress (loading of the data and loading of the page) and one action (initializing the grid) that needs to happen only after both asynchronous operations are complete (page is loaded, data is available). There are a lot of approaches to solve this depending upon what architectural approach you want to take.
The brute-force method is to create a new function that checks whether both the document is ready and the data is loaded and, if so, calls _initializeLayout(). You then call that function in both places (where the doc is loaded and where the data becomes available), and it will execute only when both conditions are satisfied. It appears that you can tell whether the data is loaded by checking for the existence of the global Data item and its relevant properties.
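A minimal sketch of that gate (the flag, function name, and the page argument are illustrative; the Data check mirrors the properties defined in data.js above):
// Called from both the page's ready handler and the data-loaded callback.
var pageReady = false;

function tryInitializeLayout(page, listView, viewState) {
    // Proceed only once the page is ready and data.js has defined Data.
    if (pageReady && window.Data && Data.items && Data.groups) {
        page._initializeLayout(listView, viewState);
    }
}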
There are more involved solutions that are architecturally a little cleaner. For example, in your doc ready handler, you can check whether the data is available yet. If it is, you just initialize the layout. If not, you install a notification so that when the data becomes available, your callback gets called and you can then initialize the layout. If the data loading code doesn't currently have a notification scheme, you create one that can be used by any client who wants to be called when the data has been loaded. This has the advantage over the first method that the data loading code doesn't have to know anything about the grid. The grid does have to know about the data - which makes sense, because the grid requires the data.
There are surely ways to use the promise/done system to do this too though I'm not personally familiar enough with it to suggest a good way to do it using that.
