Neo4j-driver: Cannot read property 'driver' of undefined - javascript

I pretty much copied the example and adjusted the database query. I don't understand why the driver is not recognized.
Versions:
Node: v11.13.0
neo4j-driver: "^1.7.5"
I get this error:
var driver = neo4j.v1.driver(
^
TypeError: Cannot read property 'driver' of undefined
My Code:
var neo4j = require('neo4j-driver').v1;
var driver = neo4j.v1.driver(
  'bolt://localhost:7687',
  neo4j.auth.basic('neo4j', 'Neo4j')
)
var session = driver.session()
session
  .run('MATCH (n:Person) return n', {
    //nameParam: 'Alice'
  })
  .subscribe({
    onNext: function(record) {
      console.log(record.get('n'))
    },
    onCompleted: function() {
      session.close()
    },
    onError: function(error) {
      console.log(error)
    }
  })

You probably meant to do this:
var neo4j = require('neo4j-driver').v1;
var driver = neo4j.driver(
...
Or, if for some reason you want to be able to explicitly specify the library version every time you use it, do this:
var neo4j = require('neo4j-driver');
var driver = neo4j.v1.driver(
...
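For completeness, here is a minimal sketch of the question's snippet with the first fix applied (keeping .v1 on the require and dropping it from the driver call); the URL, credentials, and query are the placeholders from the question:

var neo4j = require('neo4j-driver').v1;

var driver = neo4j.driver(
  'bolt://localhost:7687',
  neo4j.auth.basic('neo4j', 'Neo4j')
);
var session = driver.session();

session
  .run('MATCH (n:Person) RETURN n')
  .subscribe({
    onNext: function (record) {
      console.log(record.get('n'));
    },
    onCompleted: function () {
      // Close the session and driver once all records have been consumed.
      session.close();
      driver.close();
    },
    onError: function (error) {
      console.log(error);
    },
  });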

Their docs seem screwed up; I had the exact same problem.
Remove the v1 and it works. Not sure if this defaults to a different version of the driver or something...
let config = require("./config")[env]
const uri = 'bolt://localhost:7687'
const neo4j = require('neo4j-driver');
const driver = neo4j.driver(uri, neo4j.auth.basic(config.username, config.password));
FWIW, the way they define a config file is also broken. The Node onboarding is pretty much a turn-off.

Related

How can I get the online users list using the converse/Strophe environment in React JS

Is there any method to get the online user list using converse.js? I found it is possible with Strophe.js, which is already included in converse.js.
I created a converse plugin, but I don't know how I can show the online users.
export const moderationActions = () => {
  window.converse.plugins.add('moderation-actions', {
    dependencies: [],
    initialize: function () {
      const _converse = this._converse;
      const Strophe = window.converse.env.Strophe;
      console.log(Strophe, 'Strophe');
    },
  });
};
You can try the following:
await api.waitUntil('rosterContactsFetched');
const online_contacts = _converse.roster.models.filter(m => m.presence.get('show') === "online");
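Putting that together with the plugin from the question, a rough sketch might look like the following (assumptions: the plugin's initialize has access to _converse.api, and roster contacts expose a jid attribute):

export const moderationActions = () => {
  window.converse.plugins.add('moderation-actions', {
    dependencies: [],
    initialize: async function () {
      const _converse = this._converse;
      const { api } = _converse;

      // Wait until the roster has been fetched before reading presence data.
      await api.waitUntil('rosterContactsFetched');

      const online_contacts = _converse.roster.models.filter(
        (m) => m.presence.get('show') === 'online'
      );
      console.log(online_contacts.map((m) => m.get('jid')));
    },
  });
};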

Can the tinylicious server be launched at a port other than 3000?

Can the tinylicious server be launched at a port other than 3000? I've tried something like "PORT=4100 tinylicious" and I can see the terminal log saying:
@federation/shell-app: [1] info: Listening on port 4100 {"label":"winston","timestamp":"2021-03-08T19:23:37.861Z"}
but later it fails within my code, indicating something went wrong with the service call:
main.js:15815 ERROR TypeError: Cannot read property 'shapeClicked' of undefined
at Layer.onClick [as zzClickFunc] (collabmap.component.js:45)
at JS:24817
at Array.<anonymous> (JS:8190)
at window.<computed> (JS:1111)
at Object.<anonymous> (JS:51778)
at j (JS:51777)
and indeed, the Network tab reveals it's still posting via 3000:
Request URL: http://localhost:3000/documents/tinylicious
Referrer Policy: strict-origin-when-cross-origin
I know tinylicious is not the full Fluid server and it's just for testing purposes, so it might have been hardwired to 3000, but maybe someone has an idea how to launch it on a different port.
The Tinylicious server port is definitely configurable.
If you override their libraries, you will be able to run your app on any port you like.
You must have noticed this function:
getTinyliciousContainer();
Within its supporting libraries, get-tinylicious-container and tinylicious-driver, you will find a file in tinylicious-driver,
insecureTinyliciousUrlResolver.ts, in which every damn host:port is hardcoded to localhost:3000.
Therefore, just copy the code from getTinyliciousContainer and tinylicious-driver and make your own version of getTinyliciousContainer. You will need to copy this code anyway when you later configure Routerlicious, as Tinylicious is very lightweight and recommended only for testing purposes.
The file you need to modify in @fluidframework/tinylicious-driver is insecureTinyliciousUrlResolver.ts:
// Host and port used by both resolve() and getAbsoluteUrl(); set these to your preferred values.
const serviceHostName = "YOUR-PREFERRED-HOST-NAME";
const servicePort = "YOUR-PREFERRED-PORT";

export class InsecureTinyliciousUrlResolver implements IUrlResolver {
  public async resolve(request: IRequest): Promise<IResolvedUrl> {
    const url = request.url.replace(`http://${serviceHostName}:${servicePort}/`, "");
    const documentId = url.split("/")[0];
    const encodedDocId = encodeURIComponent(documentId);
    const documentRelativePath = url.slice(documentId.length);
    const documentUrl = `fluid://${serviceHostName}:${servicePort}/tinylicious/${encodedDocId}${documentRelativePath}`;
    const deltaStorageUrl = `http://${serviceHostName}:${servicePort}/deltas/tinylicious/${encodedDocId}`;
    const storageUrl = `http://${serviceHostName}:${servicePort}/repos/tinylicious`;
    const response: IFluidResolvedUrl = {
      endpoints: {
        deltaStorageUrl,
        ordererUrl: `http://${serviceHostName}:${servicePort}`,
        storageUrl,
      },
      tokens: { jwt: this.auth(documentId) },
      type: "fluid",
      url: documentUrl,
    };
    return response;
  }

  public async getAbsoluteUrl(resolvedUrl: IFluidResolvedUrl, relativeUrl: string): Promise<string> {
    const documentId = decodeURIComponent(
      resolvedUrl.url.replace(`fluid://${serviceHostName}:${servicePort}/tinylicious/`, ""),
    );
    /*
     * The detached container flow will ultimately call getAbsoluteUrl() with the resolved.url produced by
     * resolve(). The container expects getAbsoluteUrl's return value to be a URL that can then be roundtripped
     * back through resolve() again, and get the same result again. So we'll return a "URL" with the same format
     * described above.
     */
    return `${documentId}/${relativeUrl}`;
  }

  private auth(documentId: string) {
    const claims: ITokenClaims = {
      documentId,
      scopes: ["doc:read", "doc:write", "summary:write"],
      tenantId: "tinylicious",
      user: { id: uuid() },
      // @ts-ignore
      iat: Math.round(new Date().getTime() / 1000),
      exp: Math.round(new Date().getTime() / 1000) + 60 * 60, // 1 hour expiration
      ver: "1.0",
    };
    const utf8Key = { utf8: "12345" };
    return jsrsasign.jws.JWS.sign(null, JSON.stringify({ alg: "HS256", typ: "JWT" }), claims, utf8Key);
  }
}

export const createTinyliciousCreateNewRequest =
  (documentId: string): IRequest => ({
    url: documentId,
    headers: {
      createNew: true,
    },
  });
Then, just run your React app standalone instead of concurrently, without the built-in Tinylicious server.
Go to GitHub, clone Tinylicious from the FluidFramework server repo, and run it on whatever port you want.
And there you go: now you can run Tinylicious on any host and any port you want.
The Tinylicious port is now configurable. More details in https://github.com/microsoft/FluidFramework/issues/5415

How to suppress console output in Tesseract.js?

Tesseract.js seems to print to the console with every call to .recognize(), even with no option parameters attached.
It seems possible to quiet the output with the Tesseract CLI by using the "quiet" flag, but I can't find anything like that for Tesseract.js.
I've scanned through the parameters that could be passed to "options" as found on the Tesseract.js repository:
https://github.com/naptha/tesseract.js/blob/master/docs/tesseract_parameters.md
I've tried setting everything that has to do with "DEBUG" to 0, and I've tried sending the output to a "debug_file" parameter, but nothing I do seems to change the console output.
Here's a basic example with no parameters on the "options" object:
const fs = require('fs');
const Tesseract = require('tesseract.js');

const image = fs.readFileSync('path/to/image.jpg');
const options = {};

Tesseract.recognize(image, options)
  .finally((resultOrError) => {
    Tesseract.terminate();
  });
I would expect there to be no output at all here, but instead this gets printed:
pre-main prep time: 76 ms
{ text: '',
html: '<div class=\'ocr_page\' id=\'page_1\' title=\'image ""; bbox 0 0 600 80; ppageno 0\'>\n</div>\n',
confidence: 0,
blocks: [],
psm: 'SINGLE_BLOCK',
oem: 'DEFAULT',
version: '3.04.00',
paragraphs: [],
lines: [],
words: [],
symbols: [] }
UPDATE
Okay, okay. It's early in the morning, I could have tried a little harder here. It looks like Tesseract.js automatically dumps everything to the console if you don't make calls to .catch() and .then(). With the example below, most of the console output disappears.
const fs = require('fs');
const Tesseract = require('tesseract.js');

const image = fs.readFileSync('path/to/image.jpg');
const options = {};

const doSomethingWithResult = (result) => { result };
const doSomethingWithError = (error) => { error };

Tesseract.recognize(image, options)
  .then(result => doSomethingWithResult(result))
  .catch(err => doSomethingWithError(err))
  .finally((resultOrError) => {
    Tesseract.terminate();
  });
Now, only this gets printed to the console:
pre-main prep time: 66 ms
I'd still like a way to suppress this, so I'm going to leave the question unanswered for now. I hope someone can chime in with a suggestion.

How to fix Error: Not implemented: navigation (except hash changes)

I am implementing a unit test for a file that contains window.location.href, and I need to check it.
My Jest version is 22.0.4. Everything is fine when I run my tests on Node version >= 10,
but I get this error when I run them on v8.9.3:
console.error node_modules/jsdom/lib/jsdom/virtual-console.js:29
Error: Not implemented: navigation (except hash changes)
I have no idea what it means. I have searched many pages looking for a solution or any hint to figure out what is happening here.
[UPDATE] - I took a deep look at the source code and I think this error comes from jsdom.
at module.exports (webapp/node_modules/jsdom/lib/jsdom/browser/not-implemented.js:9:17)
at navigateFetch (webapp/node_modules/jsdom/lib/jsdom/living/window/navigation.js:74:3)
navigation.js file
exports.evaluateJavaScriptURL = (window, urlRecord) => {
  const urlString = whatwgURL.serializeURL(urlRecord);
  const scriptSource = whatwgURL.percentDecode(Buffer.from(urlString)).toString();
  if (window._runScripts === "dangerously") {
    try {
      return window.eval(scriptSource);
    } catch (e) {
      reportException(window, e, urlString);
    }
  }
  return undefined;
};
exports.navigate = (window, newURL, flags) => {
  // This is NOT a spec-compliant implementation of navigation in any way. It implements a few selective steps that
  // are nice for jsdom users, regarding hash changes and JavaScript URLs. Full navigation support is being worked on
  // and will likely require some additional hooks to be implemented.
  const document = idlUtils.implForWrapper(window._document);
  const currentURL = document._URL;
  if (!flags.reloadTriggered && urlEquals(currentURL, newURL, { excludeFragments: true })) {
    if (newURL.fragment !== currentURL.fragment) {
      navigateToFragment(window, newURL, flags);
    }
    return;
  }
  // NOT IMPLEMENTED: Prompt to unload the active document of browsingContext.
  // NOT IMPLEMENTED: form submission algorithm
  // const navigationType = 'other';
  // NOT IMPLEMENTED: if resource is a response...
  if (newURL.scheme === "javascript") {
    window.setTimeout(() => {
      const result = exports.evaluateJavaScriptURL(window, newURL);
      if (typeof result === "string") {
        notImplemented("string results from 'javascript:' URLs", window);
      }
    }, 0);
    return;
  }
  navigateFetch(window);
};
not-implemented.js
module.exports = function (nameForErrorMessage, window) {
  if (!window) {
    // Do nothing for window-less documents.
    return;
  }
  const error = new Error(`Not implemented: ${nameForErrorMessage}`);
  error.type = "not implemented";
  window._virtualConsole.emit("jsdomError", error);
};
I see some weird logic in these files:
const scriptSource = whatwgURL.percentDecode(Buffer.from(urlString)).toString();
which then checks the resulting string and reports the error.
Alternative version that worked for me with Jest only:
let assignMock = jest.fn();

delete window.location;
window.location = { assign: assignMock };

afterEach(() => {
  assignMock.mockClear();
});
Reference:
https://remarkablemark.org/blog/2018/11/17/mock-window-location/
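For context, a small sketch of how that mock might be used in a test; goToLogin is a hypothetical function under test that calls window.location.assign('/login'):

let assignMock = jest.fn();

beforeAll(() => {
  // Replace jsdom's location object with a minimal mock.
  delete window.location;
  window.location = { assign: assignMock };
});

afterEach(() => {
  assignMock.mockClear();
});

test('redirects to the login page', () => {
  goToLogin(); // hypothetical code under test
  expect(assignMock).toHaveBeenCalledWith('/login');
});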
Alternate solution: You could mock the location object
const mockResponse = jest.fn();
Object.defineProperty(window, 'location', {
  value: {
    hash: {
      endsWith: mockResponse,
      includes: mockResponse,
    },
    assign: mockResponse,
  },
  writable: true,
});
I faced a similar issue in one of my unit tests. Here's what I did to resolve it.
Replace the assignment to window.location.href with window.location.assign(url) or
window.location.replace(url).
JSDOM will still complain that window.location.assign is not implemented:
Error: Not implemented: navigation (except hash changes)
Then, in one of your unit tests for the component / function containing window.location.assign(url) or window.location.replace(url), define the following:
sinon.stub(window.location, 'assign');
sinon.stub(window.location, 'replace');
Make sure you import sinon: import sinon from 'sinon';
Hopefully, this fixes the issue for you as it did for me.
The reason JSDOM complains with Error: Not implemented: navigation (except hash changes) is that JSDOM does not implement methods like window.alert, window.location.assign, etc.
References:
http://xxd3vin.github.io/2018/03/13/error-not-implemented-navigation-except-hash-changes.html
https://www.npmjs.com/package/jsdom#virtual-consoles
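A rough sketch of the sinon approach described above, assuming a jsdom version where window.location.assign is still stubbable; navigateHome is a hypothetical function under test that calls window.location.assign('/home'):

import sinon from 'sinon';

describe('navigateHome', () => {
  let assignStub;

  beforeEach(() => {
    // Stub assign so no real navigation is attempted.
    assignStub = sinon.stub(window.location, 'assign');
  });

  afterEach(() => {
    assignStub.restore();
  });

  it('navigates to the home page', () => {
    navigateHome(); // hypothetical code under test
    sinon.assert.calledWith(assignStub, '/home');
  });
});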
You can use the jest-location-mock package for that.
Usage example with CRA (create-react-app):
// src/setupTests.ts
import "jest-location-mock";
I found a good reference that explains and solves the problem: https://remarkablemark.org/blog/2018/11/17/mock-window-location/
Because the tests are running under Node, it can't understand window.location, so we need to mock it. For example:
delete window.location;
window.location = { reload: jest.fn() };
The following worked for me:
const originalHref = window.location.href;

afterEach(() => {
  window.history.replaceState({}, "", decodeURIComponent(originalHref));
});
happy coding :)
it('test', () => {
  const { open } = window;
  delete window.open;
  window.open = jest.fn();
  jest.spyOn(window, 'open');
  // then call the function that's calling the window
  expect(window.open).toHaveBeenCalled();
  window.open = open;
});

@sentry/node integration to wrap bunyan log calls as breadcrumbs

Sentry by default has an integration for console.log that makes its calls part of breadcrumbs (import name: Sentry.Integrations.Console).
How can we make it to work for bunyan logger as well, like:
const koa = require('koa');
const app = new koa();

const bunyan = require('bunyan');
const log = bunyan.createLogger({
  name: 'app',
  ..... other settings go here ....
});

const Sentry = require('@sentry/node');
Sentry.init({
  dsn: MY_DSN_HERE,
  integrations: integrations => {
    // should anything be handled here & how?
    return [...integrations];
  },
  release: 'xxxx-xx-xx'
});

app.on('error', (err) => {
  Sentry.captureException(err);
});

// I am trying all to be part of sentry breadcrumbs
// but only console.log('foo'); is working
console.log('foo');
log.info('bar');
log.warn('baz');
log.debug('any');
log.error('many');

throw new Error('help!');
P.S. I have already tried bunyan-sentry-stream, but with no success with @sentry/node; it just pushes entries instead of treating them as breadcrumbs.
Bunyan supports custom streams, which are just objects with a write function. See https://github.com/trentm/node-bunyan#streams
Below is an example custom stream that simply writes to the console. It would be straightforward to adapt this example to write to the Sentry module instead, likely calling Sentry.addBreadcrumb({}) or a similar function; see the sketch after the example.
Please note, though, that the variable record in my example below is a JSON string, so you would likely want to parse it to get the log level, message, and other data out of it for submission to Sentry.
{
  level: 'debug',
  stream: (function () {
    return {
      write: function (record) {
        console.log('Hello: ' + record);
      }
    };
  })()
}
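As a follow-up, here is a minimal sketch (not part of the original answer) of a bunyan stream that forwards records to Sentry as breadcrumbs. The level mapping and breadcrumb fields are assumptions, and it presumes Sentry.init() has already been called as in the question:

const Sentry = require('@sentry/node');
const bunyan = require('bunyan');

// Custom stream: parse each bunyan record and record it as a Sentry breadcrumb.
const sentryBreadcrumbStream = {
  write: function (record) {
    // With the default stream type the record arrives as a JSON string.
    const data = typeof record === 'string' ? JSON.parse(record) : record;
    Sentry.addBreadcrumb({
      category: 'bunyan',
      message: data.msg,
      // bunyan levels: 10 trace, 20 debug, 30 info, 40 warn, 50 error, 60 fatal
      level: data.level >= 40 ? 'warning' : 'info',
      data: { name: data.name, time: data.time },
    });
  },
};

const log = bunyan.createLogger({
  name: 'app',
  streams: [
    { level: 'debug', stream: sentryBreadcrumbStream },
    { level: 'info', stream: process.stdout },
  ],
});

log.info('bar'); // now recorded as a breadcrumb before any captured exception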
