I've been trying for a few hours now to understand why this happens. In my Electron app I would like to use the nedb-promises package ("nedb-promises": "^6.2.1"). Installation and configuration work so far, but on every app start (dev & prod) the db file gets replaced by a new, empty one. Shouldn't the package handle that?
I took the code from this example:
https://shivekkhurana.medium.com/persist-data-in-electron-apps-using-nedb-5fa35500149a
// db.js
const { app } = require('electron');
const Datastore = require('nedb-promises');

const dbFactory = (fileName) => Datastore.create({
  filename: `${process.env.NODE_ENV === 'development' ? '.' : app.getPath('userData')}/data/${fileName}`,
  timestampData: true,
  autoload: true
});

const db = {
  customers: dbFactory('customers.db'),
  tasks: dbFactory('tasks.db')
};

module.exports = db;
import db from './db'
....
// the load handler shouldn't matter here, because the file is already replaced by the time this runs
ipcMain.handle('Elements:Get', async (event, args) => {
  // 'SELECT * FROM Customers'
  let data = await db.customers.find({});
  console.log(data);
  return data;
})
...
// Set an item
ipcMain.handle('Element:Save', async (event, data) => {
  console.log(data)
  const result = db.customers.insertOne(data.item).then((newDoc) => {
    console.log(newDoc)
    return newDoc
  }).catch((err) => {
    console.log("Error while Adding")
    console.log(err)
  });
  console.log(result);
  return result;
})
Note: After "adding", newDoc contains the new element, and when I check the file manually in the filesystem the entry is there. When I close the app and open it again, the file gets replaced.
I've checked the docs up and down and I have no clue what I'm doing wrong. Thanks for your help.
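One thing worth ruling out (a sketch, not the original code): in development the relative './data' prefix resolves against the process working directory, which can change between launches; a factory that resolves an absolute directory and creates it up front would look roughly like this.

// db.js variant (sketch): build an absolute path and make sure the
// data directory exists before nedb autoloads the file.
const path = require('path');
const fs = require('fs');
const { app } = require('electron');
const Datastore = require('nedb-promises');

const dbFactory = (fileName) => {
  const baseDir = process.env.NODE_ENV === 'development'
    ? path.join(__dirname, 'data')            // absolute, independent of the cwd
    : path.join(app.getPath('userData'), 'data');

  fs.mkdirSync(baseDir, { recursive: true }); // no-op if it already exists

  return Datastore.create({
    filename: path.join(baseDir, fileName),
    timestampData: true,
    autoload: true
  });
};

module.exports = {
  customers: dbFactory('customers.db'),
  tasks: dbFactory('tasks.db')
};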
Related
I have managed to use Fleek to update IPFS via straight JavaScript. I am now trying to add this functionality to a clean install of a SvelteKit app. I think I am having trouble with the syntax around imports, but I am not sure what I am doing wrong. When I click the button in index.svelte I get the following error:
Uncaught ReferenceError: require is not defined
uploadIPFS upload.js:3
listen index.mjs:412..........(I truncated the error here)
A few thoughts
I am wondering if it works in plain JavaScript because there it is called in Node (running on the server), but in Svelte it is running on the client?
More Details
The index.svelte file looks like this
<script>
  import { uploadIPFS } from '../IPFS/upload'
</script>

<button on:click={uploadIPFS}>
  upload to ipfs
</button>
The upload.js file looks like this:
export const uploadIPFS = () => {
  const fleek = require('@fleekhq/fleek-storage-js');

  const apiKey = 'cZsQh9XV5+6Nd1+Bou4OuA==';
  const apiSecret = '';
  const data = 'pauls test load';

  const testFunctionUpload = async (data) => {
    const date = new Date();
    const timestamp = date.getTime();
    const input = {
      apiKey,
      apiSecret,
      key: `file-${timestamp}`,
      data
    };
    try {
      const result = await fleek.upload(input);
      console.log(result);
    } catch (e) {
      console.log('error', e);
    }
  };

  testFunctionUpload(data);
};
I have also tried using the other import syntax, and when I do I get the following error:
500
global is not defined....
The import with the other syntax is:
import fleekStorage from '@fleekhq/fleek-storage-js';

function uploadIPFS() {
  console.log('fleekStorage', fleekStorage)
};

export default uploadIPFS;
*I erased the API secret in the code above. In the future I will store these in a .env file.
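For reference, a dynamic import deferred to the click handler (a sketch, not something from the original post) avoids running require during server-side rendering; the "global is not defined" error suggests the package may still expect Node globals in the browser, so a server-side home for the upload might be the safer option.

<script>
  // Sketch (assumed): load the package only when the button is clicked,
  // so nothing CommonJS executes during SSR.
  async function uploadIPFS() {
    const mod = await import('@fleekhq/fleek-storage-js');
    const fleek = mod.default || mod;
    console.log('fleekStorage', fleek);
  }
</script>

<button on:click={uploadIPFS}>
  upload to ipfs
</button>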
Even more details (if you need them)
The file below will update IPFS and runs via the command
npm run upload
That file is below. For the version I used in Svelte, I simplified it by removing all the file management and just loading a variable instead of a file (as in the upload.js example above).
const fs = require('fs');
const path = require('path');
const fleek = require('@fleekhq/fleek-storage-js');
require('dotenv').config()

const apiKey = process.env.FLEEK_API_KEY;
const apiSecret = process.env.FLEEK_API_SECRET;

const testFunctionUpload = async (data) => {
  const date = new Date();
  const timestamp = date.getTime();
  const input = {
    apiKey,
    apiSecret,
    key: `file-${timestamp}`,
    data,
  };
  try {
    const result = await fleek.upload(input);
    console.log(result);
  } catch (e) {
    console.log('error', e);
  }
}

// File management not used in my Svelte version, to keep it simple
const filePath = path.join(__dirname, 'README.md');

fs.readFile(filePath, (err, data) => {
  if (!err) {
    testFunctionUpload(data);
  }
})
I'm trying to listen to events emitted from the USDT contract Transfer function using ethers.js (not web3) in a node.js application.
When I run the script, the code runs with no errors and then quickly exits. I'd expect to get the event logs. I'm not sure what step I'm missing.
I've tested this script by calling the getOwner() method and console logging the result; that works fine, so my connection to mainnet is OK.
I'm using an Alchemy websocket.
My index.js file
const hre = require("hardhat");
const ethers = require('ethers');
const USDT_ABI = require('../abis/USDT_ABI.json')

async function main() {
  const usdt = "0xdAC17F958D2ee523a2206206994597C13D831ec7";
  const provider = new ethers.providers.WebSocketProvider("wss://eth-mainnet.ws.alchemyapi.io/v2/MY_API");
  const contract = new ethers.Contract(usdt, USDT_ABI, provider)

  contract.on('Transfer', (from, to, value) => console.log(from, to, value))
}

main()
  .then(() => process.exit(0))
  .catch(error => {
    console.error(error);
    process.exit(1);
  });
My hardhat.config.js file
require("#nomiclabs/hardhat-waffle");
require('dotenv').config()
// This is a sample Hardhat task. To learn how to create your own go to
// https://hardhat.org/guides/create-task.html
task("accounts", "Prints the list of accounts", async () => {
const accounts = await ethers.getSigners();
for (const account of accounts) {
console.log(account.address);
}
});
// You need to export an object to set up your config
// Go to https://hardhat.org/config/ to learn more
/**
* #type import('hardhat/config').HardhatUserConfig
*/
module.exports = {
paths: {
artifacts: './src/artifacts',
},
networks: {
mainnet: {
url: "wss://eth-mainnet.ws.alchemyapi.io/v2/MY_API",
accounts: [`0x${process.env.PRIVATE_KEY}`]
},
hardhat: {
chainId: 1337
},
},
solidity: "0.4.8"
};`
I solved this by removing
  .then(() => process.exit(0))
  .catch(error => {
    console.error(error);
    process.exit(1);
  });
and just calling main(). The Hardhat docs recommend using the .then and .catch code, but with a long-running process like this script's contract.on() listener, it causes the script to exit.
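A minimal sketch of that change (reconstructed from the description above, not copied from the post):

// Just call main(); the open WebSocket subscription from contract.on()
// keeps the event loop alive, so the process no longer exits immediately.
main();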
I do this:
const ethers = require('ethers');

const abi = [{...}]
const contractAddress = '0x000...'

const webSocketProvider = new ethers.providers.WebSocketProvider(process.env.ETHEREUM_NODE_URL, process.env.NETWORK_NAME);
const contract = new ethers.Contract(contractAddress, abi, webSocketProvider);

contract.on("Transfer", (from, to, value, event) => {
  console.log({
    from: from,
    to: to,
    value: value.toString(),
    data: event
  });
});
The event returns all data related to the event and the transaction.
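As a follow-up note (not part of the original answer): inside that handler the raw BigNumber can be made human-readable with ethers v5's formatUnits; USDT uses 6 decimals.

// inside the Transfer handler (sketch): format the raw amount
const readable = ethers.utils.formatUnits(value, 6); // USDT has 6 decimals
console.log(`Transfer of ${readable} USDT from ${from} to ${to}`);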
I am trying to use Cypress functions in files other than the main one (the test file). I am wondering if it is possible.
Actually, I did this: below is the code in my test.js file. Note that the first function is what I'm trying to get working; the second one works normally and I have no problem with it. The reason I am trying to do this is that I may need to reuse the same function multiple times.
My folder tree:
static_copied
  pages
    cities
      Rome
      New York
      Bombay
      Tokyo
      London
      Moscow
test.js file:
const pathCities = 'static_copied/pages/cities'

it('Retrieve cities from static and divide links', () => {
  let cities1 = misc.retrieveCities()
  console.log(cities1)

  // this works
  cy.task('readFolder', pathCities).then(cities => {
    console.log('cities ', cities, typeof cities) // prints an array of cities, and 'object'
  })
})
My misc.help.js file:
const pathCities = 'static_copied/pages/cities'

module.exports = {
  retrieveCities,
  [...]
}

[...]

function retrieveCities() {
  cy.task('readFolder', pathCities).then(res => {
    console.log('here', res, typeof res)
    return res
  })
}
and finally my cypress/plugins/index.js file:
const fs = require('fs')

// opens devTools by default
module.exports = (on, config) => {
  [...]

  // reads a folder, both folder and file names
  on('task', {
    readFolder(path) {
      let foldersAnFiles = fs.readdirSync(path, 'utf8')
      console.log('--->', foldersAnFiles, typeof foldersAnFiles)
      let folders = []

      // if it's a file, exclude it from the result
      foldersAnFiles.filter(function (folder) {
        if (folder.indexOf('.') === -1) {
          folders.push(folder)
        }
      })
      return folders
    },
  })
}
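As an aside (a sketch, not from the post), the folder/file split could also be done with fs.statSync instead of checking for a dot in the name, so directory names that contain dots are not dropped:

// cypress/plugins/index.js (variant sketch)
const fs = require('fs')
const path = require('path')

module.exports = (on, config) => {
  on('task', {
    readFolder(dirPath) {
      // keep only entries that are directories
      return fs.readdirSync(dirPath, 'utf8')
        .filter(name => fs.statSync(path.join(dirPath, name)).isDirectory())
    },
  })
}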
What happens is that in the misc.help.js file the print is correct: in the retrieveCities() function, console.log('here', res, typeof res) correctly prints an array.
But when I return it to the main test file, console.log(cities1) prints undefined.
Is there a way to pass to the main file my result?
Add this to your commands file, then call cy.retrieveCities() in any test file and it will work.
Cypress.Commands.add('retrieveCities', () => {
  return cy.task('readFolder', pathCities).then(res => {
    return res
  })
})
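A usage sketch (assuming pathCities is defined next to the command, as in the question):

// In a spec file: the custom command yields the result of the task,
// so it can be chained like any built-in Cypress command.
it('Retrieve cities via the custom command', () => {
  cy.retrieveCities().then(cities => {
    console.log('cities', cities)
    expect(cities).to.be.an('array')
  })
})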
I've been working on a Node project that involves fetching some data from BigQuery. Everything has been fine so far; I have my credential.json file (from BigQuery) and the project works as expected.
However, I want to implement a new feature in the project and this would involve fetching another set of data from BigQuery. I have an entirely different credential.json file for this new dataset. My project seems to recognize only the initial credential.json file I had (I named them differently though).
Here's a snippet of how I linked my first credential.json file:
function createCredentials() {
  try {
    const encodedCredentials = process.env.GOOGLE_AUTH_KEY;
    if (typeof encodedCredentials === 'string' && encodedCredentials.length > 0) {
      const google_auth = atob(encodedCredentials);
      if (!fs.existsSync('credentials.json')) {
        fs.writeFile("credentials.json", google_auth, function (err, google_auth) {
          if (err) console.log(err);
          console.log("Successfully Written to File.");
        });
      }
    }
  }
  catch (error) {
    logger.warn(`Ensure that the environment variable for GOOGLE_AUTH_KEY is set correctly: full errors is given here: ${error.message}`)
    process.kill(process.pid, 'SIGTERM')
  }
}
Is there a way to fuse my two credential.json files together? If not, how can I separately declare which credential.json file to use?
If not, how can I separately declare which credential.json file to use?
What I would do is create a function that acts as the exit point to BigQuery and pass it an identifier for which credential to generate; that credential is then used when calling BigQuery.
The code below assumes you changed this:
function createCredentials() {
  try {
    const encodedCredentials = process.env.GOOGLE_AUTH_KEY;
To this:
function createCredentials(auth) {
  try {
    const encodedCredentials = auth;
And you can use it like this:
import BigQuery from '@google-cloud/bigquery';
import { GoogApi } from "../apiManager" // Private code to get Token from client DB

if (!global._babelPolyfill) {
  var a = require("babel-polyfill")
}

describe('Check routing', async () => {
  it('Test stack ', async (done, auth) => {
    // Fetch client Auth from local Database
    // Replace the 2 values below with real values
    const tableName = "myTest";
    const dataset = "myDataset";

    try {
      const bigquery = new BigQuery({
        projectId: `myProject`,
        keyFilename: this.createCredentials(auth)
      });

      await bigquery.createDataset(dataset)
        .then(args => {
          console.log(`Create dataset, result is: ${args}`)
        })
        .catch(err => {
          console.log(`Error in the process: ${err.message}`)
        })
    } catch (err) {
      console.log("err", err)
    }
  })
})
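A compact sketch of that idea (assumed, not from the answer; the second environment variable name is made up for illustration): write one credentials file per identifier and return its path, so the caller decides which credential set BigQuery uses.

const fs = require('fs');

// Sketch: decode the base64-encoded credentials and write them to a named
// file, returning the path so it can be passed as keyFilename to BigQuery.
function createCredentials(encodedCredentials, fileName = 'credentials.json') {
  const decoded = Buffer.from(encodedCredentials, 'base64').toString('utf8');
  if (!fs.existsSync(fileName)) {
    fs.writeFileSync(fileName, decoded);
  }
  return fileName;
}

// Example: pick the file based on which dataset is being queried
// (GOOGLE_AUTH_KEY_B is a hypothetical second environment variable).
const keyFilename = createCredentials(process.env.GOOGLE_AUTH_KEY_B, 'credentials-b.json');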
I have an application that persists its state on disk. When any state change occurs, it reads the old state from a file, changes the state in memory, and persists it to disk again. The problem is that the store function only writes to disk after the program closes, and I don't know why.
const load = (filePath) => {
  const fileBuffer = fs.readFileSync(filePath, "utf8");
  return JSON.parse(fileBuffer);
}

const store = (filePath, data) => {
  const contentString = JSON.stringify(data);
  fs.writeFileSync(filePath, contentString);
}
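For reference, a quick sanity check (a sketch, not from the post): writeFileSync puts the content on disk before it returns, so storing and loading back-to-back should work without waiting for the program to exit.

// store() then load() right away; with synchronous fs calls the round trip
// succeeds immediately, so any delay must come from somewhere else.
store('/tmp/state.json', { datasets: [] });
console.log(load('/tmp/state.json')); // -> { datasets: [] }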
To create a complete example, let's use the load-dataset command in the file "src/interpreter/index.js".
while (this.isRunning) {
  readLineSync.promptCL({
    "load-dataset": async (type, name, from) => {
      await loadDataset({ type, name, from });
    },
    ...
  }, {
    limit: null,
  });
}
In general, this calls loadDataset, which reads JSON or CSV files.
export const loadDataset = async (options) => {
  switch (options.type) {
    case "csv":
      await readCSVFile(options.from)
        .then(data => {
          app.createDataset(options.name, data);
        });
      break;
    case "json":
      const data = readJSONFile(options.from);
      app.createDataset(options.name, data);
      break;
  }
}
The method createDataset() reads the file from disk, updates it, and writes it again.
createDataset(name, data) {
  const state = loadState();
  state.datasets = [
    ...state.datasets,
    { name, size: data.length }
  ];
  storeState(state);

  const file = loadDataset();
  file.datasets = [
    ...file.datasets,
    { name, data }
  ];
  storeDataset(file);
}
The methods loadState(), storeState(), loadDataset(), and storeDataset() use the initial methods:
const loadState = () =>
  load(stateFilePath);

const storeState = state =>
  store(stateFilePath, state);

...

const loadDataset = () =>
  load(datasetFilePath);

const storeDataset = dataset =>
  store(datasetFilePath, dataset);
I'm using a package from npm called readline-sync to create a simple "terminal"; I don't know if it causes some conflicts.
The source code is on GitHub: Git repo. In the file "index.js", the method createDataset() calls loadState() and storeState(), both of which use the methods shown above.
The package readline-sync is used in the interpreter (Interpreter file), which basically loops until the exit command.
Just as a note, I'm using Ubuntu 18.04.2 and Node.js 10.15.0. I based this code on an example from a YouTube video. That person is using macOS, and I really hope the operating system isn't the problem.