Consider the first code snippet below. It leverages the Jest framework and the Supertest library. It is currently generated via a GUI where the user makes some selections, enters some additional data and exports the JavaScript file. Testing endpoints is the only scope of the generation (hence the combination of Jest and Supertest).
Editing an exported file currently happens only via an external editor like VSCode. Now a request came in to be able to edit the JavaScript via both the GUI and an editor like VSCode.
So the idea is to create an intermediate, e.g. "descriptive", JSON file, like the second snippet below, that can be modified both in an editor and by the GUI. After that, it gets transformed into the final JavaScript code.
Are there other methods that would accomplish the same effect, since an intermediate file requires "learning" a new syntax and structure?
Is there a library that provides such functionality?
Is this a known concept and if so what is it called?
const app = require('./server');
const supertest = require('supertest');
const request = supertest(app);
describe('api tests', () => {
  it('gets the test endpoint', async () => {
    const response = await request.get('/test');
    expect(response.status).toBe(200);
    expect(response.body).toBe('hello world');
  });
});
{
  "describe": {
    "description": "api tests",
    "it": {
      "description": "gets the test endpoint",
      "request": {
        "method": "get",
        "url": "/test",
        "expect": [
          {
            "response.status": "toBe(200)"
          },
          {
            "response.body": "toBe('hello world')"
          }
        ]
      }
    }
  }
}
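For illustration, a minimal sketch of the transformation step described above, assuming the JSON shape of the second snippet (the generateTest function and its output template are hypothetical, not an existing library):

// generate-test.js -- hypothetical transformer from the descriptive JSON to a Jest/Supertest file
function generateTest(spec) {
  // each entry in "expect" is { "<subject>": "<assertion>" }
  const expectations = spec.describe.it.request.expect
    .map(e => {
      const [subject, assertion] = Object.entries(e)[0];
      return `    expect(${subject}).${assertion};`;
    })
    .join('\n');

  return `const app = require('./server');
const supertest = require('supertest');
const request = supertest(app);

describe('${spec.describe.description}', () => {
  it('${spec.describe.it.description}', async () => {
    const response = await request.${spec.describe.it.request.method}('${spec.describe.it.request.url}');
${expectations}
  });
});
`;
}

// Usage sketch:
// const fs = require('fs');
// fs.writeFileSync('api.test.js', generateTest(JSON.parse(fs.readFileSync('api.test.json', 'utf8'))));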
The following script is from a tutorial by Patrick Collins on creating NFTs. The source of this code is https://github.com/PatrickAlphaC/all-on-chain-generated-nft/blob/main/deploy/02_Deploy_RandomSVG.js
In the scripts that deploy contracts, the author uses a pattern similar to this:
let { networkConfig, getNetworkIdFromName } = require('../helper-hardhat-config')
const fs = require('fs')
module.exports = async ({
  getNamedAccounts,
  deployments,
  getChainId
}) => {
  const { deploy, get, log } = deployments
  const { deployer } = await getNamedAccounts()
  const chainId = await getChainId()
  ...
  ...
  const VRFCoordinatorMock = await deployments.get('VRFCoordinatorMock')
  ...
  ...
I am trying to understand what's going on under the hood with:
{
getNamedAccounts,
deployments,
getChainId
}
It looks like some object is getting unpacked/destructured (?). I couldn't find any documentation about what it is, or if I did, it's too complex for me to understand.
Can someone please tell me where this async function is getting exported to, and who will be requiring (i.e. calling) this function?
If the above 3 properties were destructured from some object, what is that object? How does it fit into the bigger Hardhat picture?
I've recently been going through Patrick Collins' blockchain course on FCC, and I wondered the same thing.
Hardhat is a development environment that allows you to create and run tasks such as yarn hardhat deploy. However, you can also add plugins to Hardhat that extend its functionality.
In the course, Patrick uses the hardhat-deploy plugin, which adds the fields you are trying to find documentation for. You won't find any documentation regarding these fields on the Hardhat website since they're not baked into Hardhat. Here's a link to the plugin docs :)
The async function is being executed by hardhat-deploy when you run yarn hardhat deploy. The hardhat-deploy plugin will run any deploy scripts that are in /deploy. When these functions are executed, Hardhat will automatically pass the hre object into the function as a parameter. hardhat-deploy extends the hre object by adding 4 new fields:
getNamedAccounts
getUnnamedAccounts
getChainId
deployments
Documentation for these fields can be found in the documentation I linked above.
In regards to what object is being passed in, this is a summary of it from the Hardhat website: "The Hardhat Runtime Environment, or HRE for short, is an object containing all the functionality that Hardhat exposes when running a task, test or script. In reality, Hardhat is the HRE."
So this part here:
module.exports = async ({
  getNamedAccounts,
  deployments,
  getChainId
}) => {
Is exporting an anonymous asynchronous function, which takes an object as a parameter that has those three keys.
That object could look something like this for example:
{
  getNamedAccounts: async () => { fetch(...) },
  deployments: { get: (name) => { ... } },
  getChainId: async () => { fetch(...) }
}
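In other words, the destructuring in the parameter list is just shorthand for receiving the whole object and pulling those fields out of it; the two forms below are equivalent (a sketch, not the plugin's actual source):

// Form 1: destructure directly in the parameter list
module.exports = async ({ getNamedAccounts, deployments, getChainId }) => {
  // ...
};

// Form 2: accept the full Hardhat Runtime Environment and destructure inside
module.exports = async (hre) => {
  const { getNamedAccounts, deployments, getChainId } = hre;
  // ...
};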
So let's say that the file where that export lives is named DeploymentCoordinator.js; one way you could use it from, say, your index.js is:
var coordinator = require('./DeploymentCoordinator.js');
var someResult = await coordinator({
  getNamedAccounts: async () => { fetch(...) },
  deployments: { get: (name) => { ... } },
  getChainId: async () => { fetch(...) }
});
Furthermore, if you look here https://github.com/PatrickAlphaC/all-on-chain-generated-nft/blob/main/test/RandomSVG_test.js you can see that at least deployments and getChainId seem to simply come from
const { deployments, getChainId } = require('hardhat') at the top.
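For completeness, a minimal sketch of a hardhat-deploy script that uses those fields end to end (contract name and arguments are placeholders, not taken from the question):

// deploy/01_deploy_example.js -- hypothetical minimal hardhat-deploy script
module.exports = async ({ getNamedAccounts, deployments, getChainId }) => {
  const { deploy, log } = deployments;
  const { deployer } = await getNamedAccounts(); // accounts named under namedAccounts in hardhat.config.js
  const chainId = await getChainId();

  log(`Deploying to chain ${chainId} from ${deployer}`);
  await deploy('ExampleContract', {
    from: deployer,
    args: [],   // constructor arguments
    log: true,
  });
};

module.exports.tags = ['ExampleContract'];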
I'm following a Node.js and Azure Service Bus tutorial.
I'm able to run the code below as a Node app; however, I am struggling to call a Node function from my HTML page:
Note that all the files have loaded correctly with the Node http-server module; however, when I call the main function, I get the following error:
ReferenceError: ServiceBusClient is not defined
Node.js function:
const { ServiceBusClient } = require("@azure/service-bus");
// Define connection string and related Service Bus entity names here
const connectionString ="";
const queueName = "";
async function main() {
  const sbClient = ServiceBusClient.createFromConnectionString(
    connectionString
  );
  const queueClient = sbClient.createQueueClient(queueName);
  const sender = queueClient.createSender();

  try {
    for (let i = 0; i < 1; i++) {
      const message = {
        body: "{}",
        label: "Contact",
        userProperties: {
          myCustomPropertyName: "my custom property value",
        },
      };
      console.log(`Sending message: ${message.body}`);
      await sender.send(message);
    }
    await queueClient.close();
  } finally {
    await sbClient.close();
  }
}

main().catch((err) => {
  console.log("Error occurred: ", err);
});
Any help much appreciated.
To summarize the comments from Pogrindis for the reference of others:
Node is the backend business logic and the HTML is the front end, so there is no direct communication with the methods in Node. You could implement a web server like Express to allow HTTP calls to be made to the Node server, and from there call the business logic. As for the Service Bus error, the ServiceBusClientOptions interface needs to be implemented.
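To illustrate the first point, here is a minimal, hypothetical sketch of exposing the existing send logic through an Express route that the HTML page can call with fetch (route name and port are assumptions, and main() is the function from the question with its top-level main().catch(...) invocation removed):

// server.js -- hypothetical Express wrapper around the Service Bus send logic
const express = require('express');
const app = express();

app.post('/api/send-message', async (req, res) => {
  try {
    await main();                      // the existing Service Bus send function
    res.json({ status: 'sent' });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(3000, () => console.log('API listening on port 3000'));

// In the HTML page:
// fetch('/api/send-message', { method: 'POST' }).then(r => r.json()).then(console.log);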
To use Azure SDK libraries on a website, you need to convert your code to work inside the browser. You can do this using a bundler such as rollup, webpack, parcel, etc. Refer to the bundling docs to use the @azure/service-bus library in the browser.
Moreover, the code in your sample looks like it is using version 1.
Version 7.0.0 has been recently published. Refer to the links below.
@azure/service-bus - 7.0.0
Samples for 7.0.0
Guide to migrate from @azure/service-bus v1 to v7
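For comparison, a hedged sketch of what the same send could look like against the v7 API (based on the migration guide linked above; connection string and queue name are left empty as in the question):

// v7-style send (sketch)
const { ServiceBusClient } = require("@azure/service-bus");

const connectionString = "";
const queueName = "";

async function main() {
  const sbClient = new ServiceBusClient(connectionString);
  const sender = sbClient.createSender(queueName);
  try {
    await sender.sendMessages({
      body: "{}",
      subject: "Contact",                 // "label" became "subject" in v7
      applicationProperties: {            // "userProperties" became "applicationProperties" in v7
        myCustomPropertyName: "my custom property value",
      },
    });
  } finally {
    await sender.close();
    await sbClient.close();
  }
}

main().catch((err) => console.log("Error occurred: ", err));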
I am planning to build software with Electron which should have many different windows. I have already read a book and studied the documentation to get familiar with the Electron IPC module and how it works.
This concept works for me with a manageable number of windows, but the more windows my application gets, the more confusing it becomes.
I am also new to JavaScript and prototype-based object-oriented programming.
I am trying to imagine how I could build a large piece of software without losing the overview of main.js.
The problem is that main.js needs specific IPC methods for every single window. I am thinking of a solution in which I hold the IPC methods in other JavaScript files with relevant names (a sketch of this idea follows the example calls below).
Right now I need to do this in main.js, for example:
ipcMain.on('window1-call1' .. {
//do stuff
}
ipcMain.on('window1-call2' .. {
//do stuff
}
ipcMain.on('window2-call1' .. {
//do stuff
}
and so on .....
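For instance, the idea of holding the IPC methods in separate files could be sketched like this (file names, channel names and handler bodies are hypothetical):

// ipc/window1.js -- one module per window, exporting a register function
module.exports = function registerWindow1Handlers(ipcMain) {
  ipcMain.on('window1-call1', (event, args) => {
    // do stuff for window 1
  });
  ipcMain.on('window1-call2', (event, args) => {
    // do stuff for window 1
  });
};

// main.js -- only wires the per-window modules together
const { ipcMain } = require('electron');
require('./ipc/window1')(ipcMain);
require('./ipc/window2')(ipcMain);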
Here is one example from my main.js, preload.js and renderer.js
to show how I build up a simple IPC example:
//main.js
ipc.on('open-directory-dialog', function (event) {
  dialog.showOpenDialog(mainWindow, {
    title: 'Select a image...',
    properties: ['openFile'],
    defaultPath: '/home',
    buttonLabel: "Select...",
    filters: [
      { name: 'Images', extensions: ['jpg', 'png', 'gif'] }
    ]
  }, function (files) {
    if (files) event.sender.send('selectedItem', files)
  })
})

//preload.js
window.ipcFiledialog = function (channel) {
  ipcRenderer.send('open-directory-dialog')
}

//callback
ipcRenderer.on('selectedItem', function (event, path) {
  setSelectedItem(path)
})

//renderer.js
selectDirBtn.addEventListener('click', function (event) {
  window.ipcFiledialog('open-directory-dialog')
})

function setSelectedItem (files) {
  document.getElementById('selectedItem').innerHTML = files
}
With this approach I have to create every window in main.js (this will grow main.js) and make sure I call a specific IPC method for the right window.
The callback also needs to call a specific method for every window.
So I would need many methods, and this construct does not look very flexible.
Is there any best practice to prevent main.js from growing towards the "infinite" and to get more flexible IPC methods?
According to the Expo SQLite documentation for React Native, I can initialize a db like so:
const db = SQLite.openDatabase('db.db');
This works and I can update the db like so:
update() {
  db.transaction(tx => {
    tx.executeSql(
      `select * from items where done = ?;`,
      [this.props.done ? 1 : 0],
      (_, { rows: { _array } }) => this.setState({ items: _array })
    );
  });
}
From my limited understanding, this creates a database on the device, which is then manipulated, keeping the whole db local.
I have a database with all the necessary tables already set up. How can I have it use the existing database I already have set up?
For example: (not correct syntax)
const db = SQLite.openDatabaseIShipWithApp('mypath/mydb.db');
I couldn't find any documentation to help me with this.
The only reason I mention the above is because I already have the db with the tables and data.
Any help would be appreciated!
I was able to achieve this by using expo's FileSystem.downloadAsync:
First I import it, since I'm using an Expo managed app:
import { FileSystem } from 'expo';
Then I download it from a server like so:
// load DB for expo
FileSystem.downloadAsync(
  'http://example.com/downloads/data.sqlite',
  FileSystem.documentDirectory + 'data.sqlite'
)
  .then(({ uri }) => {
    console.log('Finished downloading to ', uri)
  })
  .catch(error => {
    console.error(error);
  })
The first parameter is the uri for the location, the second one is where I'd like to place it. Here I am using documentDirectory.
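One detail worth noting: SQLite.openDatabase(name) resolves the name relative to FileSystem.documentDirectory + 'SQLite/' (the answers below rely on this as well), so a sketch of downloading straight into that directory and then opening the file could look like this (directory creation is omitted here and shown in the next answer; import paths depend on your Expo SDK version):

// Sketch: download the database into the SQLite directory and open it by name.
// In newer Expo SDKs FileSystem and SQLite come from 'expo-file-system' and
// 'expo-sqlite' instead of the 'expo' package used above.
FileSystem.downloadAsync(
  'http://example.com/downloads/data.sqlite',
  FileSystem.documentDirectory + 'SQLite/data.sqlite'
)
  .then(() => {
    const db = SQLite.openDatabase('data.sqlite');
    db.transaction(tx => {
      // "items" is the example table from the question
      tx.executeSql('select * from items;', [], (_, { rows }) => console.log(rows._array));
    });
  })
  .catch(error => console.error(error));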
If using local prepopulated database in assets:
import * as FileSystem from "expo-file-system";
import {Asset} from "expo-asset";
async function openDatabaseIShipWithApp() {
  const internalDbName = "dbInStorage.sqlite"; // Call whatever you want
  const sqlDir = FileSystem.documentDirectory + "SQLite/";
  if (!(await FileSystem.getInfoAsync(sqlDir + internalDbName)).exists) {
    await FileSystem.makeDirectoryAsync(sqlDir, {intermediates: true});
    const asset = Asset.fromModule(require("../assets/database/mydb.sqlite"));
    await FileSystem.downloadAsync(asset.uri, sqlDir + internalDbName);
  }
  this.database = SQLite.openDatabase(internalDbName);
}
This creates the SQLite directory and database if they don't exist. Otherwise, FileSystem.downloadAsync() will throw an error on a freshly installed app.
Some remarks:
You cannot use a variable in require() (only a string literal). See e.g. this.
You have to explicitly allow the file extensions .db or .sqlite to be loadable in Expo, see this. You have to create a file metro.config.js in the project root:
const defaultAssetExts = require("metro-config/src/defaults/defaults").assetExts;
module.exports = {
  resolver: {
    assetExts: [
      ...defaultAssetExts,
      "db", "sqlite"
    ]
  }
};
And you may add the following to app.json:
"expo": {
"assetBundlePatterns": [
"**/*"
]
}
If you want to delete the loaded database (e.g. for testing), you have to clear the whole Expo app data in the phone settings (deleting the cache is not sufficient), or write a method like this:
async function removeDatabase() {
  const sqlDir = FileSystem.documentDirectory + "SQLite/";
  await FileSystem.deleteAsync(sqlDir + "dbInStorage.sqlite", {idempotent: true});
}
It's pretty straightforward.
If you bundle your app, you have to move the database from the asset folder to the document directory first. In order to do that, check if a folder named SQLite exists; if not, create it. Why do you need a folder called SQLite? Because SQLite.openDatabase(databaseName) looks by default in FileSystem.documentDirectory + 'SQLite'. Then, once the folder is created, you can download the database from the asset folder. Make sure you have your database in a folder called asset; locate the asset folder under src/asset of your app's directory tree. Also, make sure to configure your app.json and metro.config.js.
import * as SQLite from 'expo-sqlite';
import * as FileSystem from 'expo-file-system';
import { Asset } from 'expo-asset';
const FOO = 'foo.db'
if (!(await FileSystem.getInfoAsync(FileSystem.documentDirectory + 'SQLite')).exists) {
  await FileSystem.makeDirectoryAsync(FileSystem.documentDirectory + 'SQLite');
};
await FileSystem.downloadAsync(
  // the name 'foo.db' is hardcoded because it is used with require()
  Asset.fromModule(require('../../asset/foo.db')).uri,
  // use the FOO constant to access 'foo.db' wherever possible
  FileSystem.documentDirectory + `SQLite/${FOO}`
);
// Then you can use the database like this
SQLite.openDatabase(FOO).transaction(...);
// app.json
{
  "name": "Your App name",
  "displayName": "Your App name",
  "assetBundlePatterns": [
    "assets/**"
  ],
  "packagerOpts": {
    "assetExts": ["db"]
  }
}
// metro config
const { getDefaultConfig } = require('@expo/metro-config');
const defaultConfig = getDefaultConfig(__dirname);
module.exports = {
  resolver: {
    assetExts: [...defaultConfig.resolver.assetExts, 'db', 'json'],
  },
  transformer: {
    getTransformOptions: async () => ({
      transform: {
        experimentalImportSupport: false,
        inlineRequires: false,
      },
    }),
  },
};
This is all extracted from the documentation of expo.
I don't believe this is possible in Expo. There is a way to use an existing database if you are using a bare Android project, which involves writing some native code to copy the database from the project assets to the standard location on the phone (/data/ etc.) for your application.
https://medium.com/@johann.pardanaud/ship-an-android-app-with-a-pre-populated-database-cd2b3aa3311f
I have always personally created the database myself with CREATE TABLE IF NOT EXISTS, since SQLite requires you to define the schema before you query it. If you need to seed the database, this step would then be followed by additional steps to insert the required data.
In most cases, you will also need to check your reference data and update it from the server at regular intervals (it might even change between publishing your apk and someone downloading the app), and this code would also work when there is no reference data in the database.
There are a couple of services which try to take this hassle away from you (e.g. Parse), but you will need to decide if you are happy with them hosting your data. I haven't used them, so I'm not sure how this works exactly, but I'm told they try to solve offline-first type problems.
Remember that in future iterations you may need to modify the structure (to add fields etc.), so you will probably need to define some code that runs when the application first starts, checks the database version and applies any changes required to bring the database up to the appropriate level.
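A rough sketch of that approach with expo-sqlite (the table, columns, version table and example migration are placeholders, not from the question):

import * as SQLite from 'expo-sqlite';

const db = SQLite.openDatabase('app.db');

db.transaction(tx => {
  // define the schema up front; runs on every start but only creates the tables once
  tx.executeSql(
    'CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY NOT NULL, done INT, value TEXT);'
  );
  tx.executeSql(
    'CREATE TABLE IF NOT EXISTS schema_version (version INT NOT NULL);'
  );
  // simple versioning: read the stored version and apply upgrades when it is behind
  tx.executeSql('SELECT version FROM schema_version;', [], (_, { rows }) => {
    const current = rows.length ? rows._array[0].version : 0;
    if (current < 1) {
      tx.executeSql('ALTER TABLE items ADD COLUMN created_at TEXT;'); // placeholder migration
      tx.executeSql('DELETE FROM schema_version;');
      tx.executeSql('INSERT INTO schema_version (version) VALUES (1);');
    }
  });
});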
I'm currently developing an app based on NativeScript and Angular 2.
My screen freezes for a while when my app fetches data through HTTP, and I'd like to put the fetching action into another thread.
I did a lot of searching on the web, and all I found is code in JavaScript like the official doc - https://docs.nativescript.org/angular/core-concepts/multithreading-model.html
Is there any way to implement multi-threading with WebWorker in TypeScript (which contains the support of the Angular-injected HTTP service) instead of the JavaScript code from the official doc?
It would be appreciated if someone could give me some guidance or hints, and it would be great if I could get some relevant example code.
Thanks.
There shouldn't be any big drawback to using WebWorkers in {N} + Angular, but be aware that currently the WebWorker is not "exactly" compatible with Angular AoT compilation.
For me, creating a WebWorker (var myWorker = new Worker('~/web.worker.js');) throws an error after bundling the application with AoT. I have seen some talk about this in the community, and possibly the way to fix this is by editing webpack.common.js and adding a loader like so:
{
  test: /\.worker.js$/,
  loaders: [
    "worker-loader"
  ]
}
Disclaimer: I have not tried this approach for fixing the error.
If someone has problems adding workers in NativeScript with Angular and Webpack, you must follow the steps listed here.
Take special care with the next steps:
When you import the worker, the route to the worker file comes after nativescript-worker-loader!.
In webpack.config.js, take care when adding this piece of code:
{
  test: /\.ts$/, exclude: /\.worker\.ts$/, use: [
    "nativescript-dev-webpack/moduleid-compat-loader",
    "@ngtools/webpack",
  ]
},
because it is probable that you have already configured the AoT compilation, like this:
{
  test: /(?:\.ngfactory\.js|\.ngstyle\.js|\.ts)$/,
  use: [
    "nativescript-dev-webpack/moduleid-compat-loader",
    "@ngtools/webpack",
  ]
},
and you only need to add the exclude: /\.worker\.ts$/,
Finally, here is an example of a worker; in this case it uses an Android native library:
example.worker.ts:
import "globals";
const context: Worker = self as any;
declare const HPRTAndroidSDK;
context.onmessage = msg => {
  let request = msg.data;
  let port = request.port;
  let result = HPRTAndroidSDK.HPRTPrinterHelper.PortOpen("Bluetooth," + port.portName);
  context.postMessage(result);
};
example.component.ts (../../workers/example.worker is the relative route to my worker):
import * as PrinterBTWorker from "nativescript-worker-loader!../../workers/example.worker";
import ...
connect(printer: HPRTPrinter): Observable<boolean> {
  if (this.isConnected()) {
    this.disconnect(); // Disconnect first if it's already connected
  }
  return Observable.create((observer) => {
    const worker = new PrinterBTWorker();
    worker.postMessage({ port: printer });
    worker.onmessage = (msg: any) => {
      worker.terminate();
      if (msg.data == 0) { // 0: Connected, -1: Disconnected
        observer.next(true);
      } else {
        observer.next(false);
      }
    };
    worker.onerror = (err) => {
      worker.terminate();
      observer.next(false);
    };
  }).pipe(timeout(5000), catchError(err => of(false)));
}
Note: I use an Observable to make my call to the worker async and to add a timeout to the call to the native code, because when it is not possible to connect to the printer (e.g. it's turned off), it takes almost 10 seconds to notify, and in my case this froze the app for all that time.
Important: It seems that it's necessary to manually run the code again every time a change is made, because the worker isn't compiled using AoT.