SvelteKit & Fleek (IPFS) import syntax problem? - javascript

I have managed to use Fleek to update IPFS via straight JavaScript. I am now trying to add this functionality to a clean install of a SvelteKit app. I think I am having trouble with the syntax around imports, but am not sure what I am doing wrong. When I click the button on the index.svelte I get the following error:
Uncaught ReferenceError: require is not defined
uploadIPFS upload.js:3
listen index.mjs:412..........(I truncated the error here)
A few thoughts
I am wondering if it works in plain JavaScript because it is being called in Node (running on the server), but fails here because in Svelte the same code runs on the client?
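(If that hunch is right, the usual pattern is to keep the Fleek call on the server and expose it to the component through an endpoint. A minimal sketch, assuming a current SvelteKit project; the route path and env variable names are placeholders, not from the question:

// src/routes/upload/+server.js (hypothetical route, current SvelteKit style)
// The Node-only dependency stays on the server; the component just fetches.
import fleekStorage from '@fleekhq/fleek-storage-js';
import { json } from '@sveltejs/kit';

export async function POST() {
  const result = await fleekStorage.upload({
    apiKey: process.env.FLEEK_API_KEY,       // placeholder env names
    apiSecret: process.env.FLEEK_API_SECRET,
    key: `file-${Date.now()}`,
    data: 'pauls test load'
  });
  return json(result);
}

The button handler would then call fetch('/upload', { method: 'POST' }) instead of invoking uploadIPFS directly in the browser.)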
More Details
The index.svelte file looks like this
<script>
  import { uploadIPFS } from '../IPFS/upload';
</script>

<button on:click={uploadIPFS}>
  upload to ipfs
</button>
The upload.js file looks like this:
export const uploadIPFS = () => {
  const fleek = require('@fleekhq/fleek-storage-js');
  const apiKey = 'cZsQh9XV5+6Nd1+Bou4OuA==';
  const apiSecret = '';
  const data = 'pauls test load';
  const testFunctionUpload = async (data) => {
    const date = new Date();
    const timestamp = date.getTime();
    const input = {
      apiKey,
      apiSecret,
      key: `file-${timestamp}`,
      data
    };
    try {
      const result = await fleek.upload(input);
      console.log(result);
    } catch (e) {
      console.log('error', e);
    }
  };
  testFunctionUpload(data);
};
I have also tried using the other import syntax and when I do I get the following error
500
global is not defined....
The import with the other syntax is:

import fleekStorage from '@fleekhq/fleek-storage-js';

function uploadIPFS() {
  console.log('fleekStorage', fleekStorage);
}

export default uploadIPFS;
*I erased the API secret in the code above. In the future I will store these in a .env file.
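(On the second error: "global is not defined" usually means a Node-oriented package is being bundled for the browser, where Node's global does not exist. A workaround people commonly use, sketched here assuming a SvelteKit version configured through vite.config.js, is to map global to globalThis at build time:

// vite.config.js — maps Node's `global` identifier to the browser's
// `globalThis` (a common shim for Node-centric packages bundled client-side)
import { sveltekit } from '@sveltejs/kit/vite';

export default {
  plugins: [sveltekit()],
  define: {
    global: 'globalThis'
  }
};
)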
Even more details (if you need them)
The file below will update IPFS and runs via the command
npm run upload
For the version I used in Svelte, I simplified this file by removing all the file management and just loading a variable instead of a file (as in the example shown earlier).
const fs = require('fs');
const path = require('path');
const fleek = require('@fleekhq/fleek-storage-js');
require('dotenv').config();

const apiKey = process.env.FLEEK_API_KEY;
const apiSecret = process.env.FLEEK_API_SECRET;

const testFunctionUpload = async (data) => {
  const date = new Date();
  const timestamp = date.getTime();
  const input = {
    apiKey,
    apiSecret,
    key: `file-${timestamp}`,
    data,
  };
  try {
    const result = await fleek.upload(input);
    console.log(result);
  } catch (e) {
    console.log('error', e);
  }
};

// File management (not used in my svelte version, to keep it simple)
const filePath = path.join(__dirname, 'README.md');
fs.readFile(filePath, (err, data) => {
  if (!err) {
    testFunctionUpload(data);
  }
});

Related

amazon s3.upload is taking time

I am trying to upload files to S3, and before that I am altering the name of each file. I accept 2 files from the request's form-data object, rename them, and upload them to S3. At the end of the task I need to return the list of renamed files that were uploaded successfully.
I am using the S3.upload() function. The problem is that the variable initialized as an empty array, which should collect the renamed file names, is returned empty because s3.upload() takes time. Is there a solution where I can store the file name when an upload succeeds and return those names in the response?
Please help me fix this. The code looks like this:
if (formObject.files.document && formObject.files.document.length > 0) {
  const circleCode = formObject.fields.circleCode[0];
  let collectedKeysFromAwsResponse = [];
  formObject.files.document.forEach(e => {
    const extractFileExtension = ".pdf";
    if (_.has(FILE_EXTENSIONS_INCLUDED, _.lowerCase(extractFileExtension))) {
      console.log(e);
      // change the filename
      const originalFileNameCleaned = "cleaning name logic";
      const _id = mongoose.Types.ObjectId();
      const s3FileName = "s3-filename-convention";
      console.log(e.path, "", s3FileName);
      const awsResponse = new File().uploadFileOnS3(e.path, s3FileName);
      if (e.hasOwnProperty('ETag')) {
        collectedKeysFromAwsResponse.push(awsResponse.key.split("/")[1]);
      }
    }
  });
}
Using await s3.upload(params).promise(); is the solution.
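Expanding on that: forEach does not wait for async work, so the array is read before any upload finishes. A sketch of the same loop rewritten with for...of and await, using the SDK v2 API directly (the bucket name and file-naming logic are placeholders, and this must run inside an async function):

const AWS = require("aws-sdk");
const fs = require("fs");
const s3 = new AWS.S3();

async function uploadAll(documents) {
  const collectedKeysFromAwsResponse = [];
  for (const e of documents) {
    const s3FileName = "s3-filename-convention"; // build the real name here
    // upload(params).promise() resolves only after the upload completes
    const awsResponse = await s3.upload({
      Bucket: "BUCKET_NAME", // placeholder
      Key: s3FileName,
      Body: fs.createReadStream(e.path),
    }).promise();
    // The resolved SendData object has a capital-K 'Key'
    collectedKeysFromAwsResponse.push(awsResponse.Key.split("/")[1]);
  }
  return collectedKeysFromAwsResponse; // fully populated before returning
}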
Use the latest code, which is the AWS SDK for JavaScript V3. Here is the code you should be using:
// Import required AWS SDK clients and commands for Node.js.
import { PutObjectCommand } from "@aws-sdk/client-s3";
import { s3Client } from "./libs/s3Client.js"; // Helper function that creates Amazon S3 service client module.
import path from "path";
import fs from "fs";

const file = "OBJECT_PATH_AND_NAME"; // Path to and name of object. For example '../myFiles/index.js'.
const fileStream = fs.createReadStream(file);

// Set the parameters.
export const uploadParams = {
  Bucket: "BUCKET_NAME",
  // Add the required 'Key' parameter using the 'path' module.
  Key: path.basename(file),
  // Add the required 'Body' parameter.
  Body: fileStream,
};

// Upload the file to the specified bucket.
export const run = async () => {
  try {
    const data = await s3Client.send(new PutObjectCommand(uploadParams));
    console.log("Success", data);
    return data; // For unit tests.
  } catch (err) {
    console.log("Error", err);
  }
};
run();
More details can be found in the AWS JavaScript V3 DEV Guide.

How To Unit Test Entry Point Node.js File With Jest

I have an entry point in my app that is executed via npm start. I'd like to run some tests on this script with Jest, but cannot figure out how I should do it. The script automatically runs, so if I import it into a Jest file, I can't call it individually in my test blocks like:
const entryPoint = require('./entry-point');

test('something', () => {
  entryPoint();
});
The code will already execute before it reaches any of the test blocks.
The code for the entry point is here:
const fs = require("fs");
const summarizeData = require("./summarize-data");

try {
  const fileName = process.argv[2];
  if (!fileName) {
    throw Error("Please enter a file name. ex: npm start <filename>");
  }
  if (!fs.existsSync(`${fileName}.json`)) {
    throw Error(`The file ${fileName}.json could not be found.`);
  }
  const jsonParsed = JSON.parse(fs.readFileSync(`${fileName}.json`, "utf8"));
  const data = summarizeData(jsonParsed);
  console.log(data);
} catch (error) {
  throw Error(error);
}
I think it will be enough to unit test the summarizeData function.
Something like this (using shouldJS for assertions):
const should = require('should');
const fs = require('fs');
const summarizeData = require('./summarize-data');

const fileName = 'testData';

test('it summarizes data properly', () => {
  const jsonParsed = JSON.parse(fs.readFileSync(`${fileName}.json`, "utf8"));
  const dataSummarized = summarizeData(jsonParsed);
  dataSummarized.something.should.be.equal(1); // TODO - add other assertions to cover the full result
});
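If you do want to exercise the entry point itself, another common pattern is to wrap the script body in an exported function and guard the auto-run with require.main, so requiring the file in Jest has no side effects. A sketch along those lines, reusing the logic from the question:

// entry-point.js — export the work as a function; only auto-run it when
// the file is executed directly (e.g. via `npm start`), not when required.
const fs = require("fs");
const summarizeData = require("./summarize-data");

const entryPoint = (fileName = process.argv[2]) => {
  if (!fileName) {
    throw Error("Please enter a file name. ex: npm start <filename>");
  }
  if (!fs.existsSync(`${fileName}.json`)) {
    throw Error(`The file ${fileName}.json could not be found.`);
  }
  const jsonParsed = JSON.parse(fs.readFileSync(`${fileName}.json`, "utf8"));
  return summarizeData(jsonParsed);
};

if (require.main === module) {
  console.log(entryPoint());
}

module.exports = entryPoint;

The test file can then require('./entry-point') and call it with a file name of its choosing.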

Why does 'fs' only persist file changes after the end of the program?

I have an application that persists its state on disk. When any state change occurs, it reads the old state from a file, changes the state in memory, and persists it to disk again. The problem is that the store function only writes to disk after the program closes, and I don't know why.
const load = (filePath) => {
  const fileBuffer = fs.readFileSync(filePath, "utf8");
  return JSON.parse(fileBuffer);
};

const store = (filePath, data) => {
  const contentString = JSON.stringify(data);
  fs.writeFileSync(filePath, contentString);
};
To create a complete example, let's use the load-dataset command in the file "src/interpreter/index.js".
while (this.isRunning) {
  readLineSync.promptCL({
    "load-dataset": async (type, name, from) => {
      await loadDataset({ type, name, from });
    },
    ...
  }, {
    limit: null,
  });
}
In general, this calls loadDataset, which reads JSON or CSV files.
export const loadDataset = async (options) => {
  switch (options.type) {
    case "csv":
      await readCSVFile(options.from).then(data => {
        app.createDataset(options.name, data);
      });
      break;
    case "json":
      const data = readJSONFile(options.from);
      app.createDataset(options.name, data);
      break;
  }
};
The method createDataset() reads the file on disk, updates it, and writes it again.
createDataset(name, data) {
  const state = loadState();
  state.datasets = [
    ...state.datasets,
    { name, size: data.length }
  ];
  storeState(state);

  const file = loadDataset();
  file.datasets = [
    ...file.datasets,
    { name, data }
  ];
  storeDataset(file);
}
The methods loadState(), storeState(), loadDataset(), and storeDataset() use the initial methods:
const loadState = () =>
  load(stateFilePath);

const storeState = state =>
  store(stateFilePath, state);

...

const loadDataset = () =>
  load(datasetFilePath);

const storeDataset = dataset =>
  store(datasetFilePath, dataset);
I'm using an npm package called readline-sync to create a simple "terminal"; I don't know if it causes conflicts.
The source code is on GitHub: Git repo. In the file "index.js", the method createDataset() calls loadState() and storeState(), which both use the methods shown above.
The package readline-sync is used in the interpreter (see the Interpreter file), which basically loops until the exit command.
Just as a note, I'm using Ubuntu 18.04.2 and Node.js 10.15.0. To write this code I followed an example from a YouTube video. That guy is using Mac OS X; I really hope the operating system isn't the problem.
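For what it's worth, fs.writeFileSync blocks until the data has been handed to the operating system, so the write itself should be visible immediately. A quick self-check (the path is arbitrary) that can help rule the store function out and point the investigation at when store() is actually being called:

const fs = require("fs");

// Write, then immediately stat and re-read the file in the same process:
// if this prints the content, writeFileSync is persisting right away.
fs.writeFileSync("/tmp/state-check.json", JSON.stringify({ ok: true }));
console.log(fs.statSync("/tmp/state-check.json").size);        // non-zero size
console.log(fs.readFileSync("/tmp/state-check.json", "utf8")); // {"ok":true}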

Outputting file details using ffprobe in ffmpeg AWS Lambda layer

I am trying to output the details of an audio file with ffmpeg using the ffprobe option, but it is just returning 'null' at the moment. I have added the ffmpeg layer in Lambda. Can anyone spot why this is not working?
const { spawnSync } = require("child_process");
const { readFileSync, writeFileSync, unlinkSync } = require("fs");
const util = require('util');
var fs = require('fs');
let path = require("path");

exports.handler = (event, context, callback) => {
  spawnSync(
    "/opt/bin/ffprobe",
    [
      `var/task/myaudio.flac`
    ],
    { stdio: "inherit" }
  );
};
This is the official AWS Lambda layer I am using; it is a great project but a little lacking in documentation.
https://github.com/serverlesspub/ffmpeg-aws-lambda-layer
First of all, I would recommend using Node.js 8.10 over Node.js 6.10 (the latter will soon be EOL, although AWS is unclear on how long it will be supported).
Also, I would not use the old-style handler with a callback.
A working example is below; since it downloads a file from the internet (I couldn't be bothered to create a deployment package with the file included), give it a bit more time to run.
const { spawnSync } = require('child_process');
const util = require('util');
var fs = require('fs');
let path = require('path');
const https = require('https');

exports.handler = async (event) => {
  const source_url = 'https://upload.wikimedia.org/wikipedia/commons/b/b2/Bell-ring.flac';
  const target_path = '/tmp/test.flac';

  async function downloadFile() {
    return new Promise((resolve, reject) => {
      const file = fs.createWriteStream(target_path);
      const request = https.get(source_url, function(response) {
        const stream = response.pipe(file);
        stream.on('finish', () => { resolve(); });
      });
    });
  }

  await downloadFile();

  const test = spawnSync('/opt/bin/ffprobe', [
    target_path
  ]);
  console.log(test.output.toString('utf8'));

  const response = {
    statusCode: 200,
    body: JSON.stringify([test.output.toString('utf8')]),
  };
  return response;
};
NB! In production, be sure to generate a unique temporary file, as the instances a Lambda function runs on are often shared from invocation to invocation; you don't want multiple invocations stepping on each other's files! When done, delete the temporary file, otherwise you might run out of free space on the instance executing your functions. The /tmp folder can hold 512 MB, so it can run out fast if you work with many large flac files.
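A minimal sketch of that advice (the path-building helper is mine, not part of the layer): derive a per-invocation file name under /tmp and remove it when done.

const os = require("os");
const path = require("path");
const crypto = require("crypto");
const fs = require("fs");

// Unique per-invocation file name so concurrent invocations on a shared
// instance don't clobber each other (crypto.randomUUID needs Node 14.17+);
// on Lambda, os.tmpdir() resolves to /tmp.
const target_path = path.join(os.tmpdir(), `probe-${crypto.randomUUID()}.flac`);
try {
  // ...download to target_path and run ffprobe on it, as in the example above...
} finally {
  if (fs.existsSync(target_path)) fs.unlinkSync(target_path); // free /tmp space
}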
I'm not fully familiar with this layer; however, from looking at the git repo of the thumbnail-builder, it looks like the child_process call returns a promise, so you should wait for its result using .then(); otherwise it returns null because it doesn't wait for the result.
So try something like:
return spawnSync(
  "/opt/bin/ffprobe",
  [
    `var/task/myaudio.flac`
  ],
  { stdio: "inherit" }
).then(result => {
  return result;
})
.catch(error => {
  // handle error
});

Firebase cloud functions for crashlytics are not triggering

So we have a project in which Crashlytics and Analytics are set up and currently functioning. However, I am not able to successfully implement the cloud functions for the three triggers found here: Crashlytics Events.
While testing other cloud functions, such as those triggered by read/write operations on the database, the functions execute correctly. When deploying the functions folder to Firebase, I get no errors regarding the triggers, and the code is very similar to the samples on GitHub. I have made sure that the SDK is up to date and that I have run npm install in the functions folder for any dependencies.
Here is the JS file:
'use strict';

const functions = require('firebase-functions');
const rp = require('request-promise');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);

// Helper function that posts to Slack about the new issue
const notifySlack = (slackMessage) => {
  // See https://api.slack.com/docs/message-formatting on how
  // to customize the message payload
  return rp({
    method: 'POST',
    uri: functions.config().slack.webhook_url,
    body: {
      text: slackMessage,
    },
    json: true,
  });
};

exports.postOnNewIssue = functions.crashlytics.issue().onNewDetected((event) => {
  const data = event.data;
  const issueId = data.issueId;
  const issueTitle = data.issueTitle;
  const appName = data.appInfo.appName;
  const appPlatform = data.appInfo.appPlatform;
  const latestAppVersion = data.appInfo.latestAppVersion;
  const slackMessage = `<!here|here> There is a new issue - ${issueTitle} (${issueId}) ` +
    `in ${appName}, version ${latestAppVersion} on ${appPlatform}`;
  return notifySlack(slackMessage).then(() => {
    return console.log(`Posted new issue ${issueId} successfully to Slack`);
  });
});

exports.postOnRegressedIssue = functions.crashlytics.issue().onRegressed((event) => {
  const data = event.data;
  const issueId = data.issueId;
  const issueTitle = data.issueTitle;
  const appName = data.appInfo.appName;
  const appPlatform = data.appInfo.appPlatform;
  const latestAppVersion = data.appInfo.latestAppVersion;
  const resolvedTime = data.resolvedTime;
  const slackMessage = `<!here|here> There is a regressed issue ${issueTitle} (${issueId}) ` +
    `in ${appName}, version ${latestAppVersion} on ${appPlatform}. This issue was previously ` +
    `resolved at ${new Date(resolvedTime).toString()}`;
  return notifySlack(slackMessage).then(() => {
    return console.log(`Posted regressed issue ${issueId} successfully to Slack`);
  });
});

exports.postOnVelocityAlert = functions.crashlytics.issue().onVelocityAlert((event) => {
  const data = event.data;
  const issueId = data.issueId;
  const issueTitle = data.issueTitle;
  const appName = data.appInfo.appName;
  const appPlatform = data.appInfo.appPlatform;
  const latestAppVersion = data.appInfo.latestAppVersion;
  const crashPercentage = data.velocityAlert.crashPercentage;
  const slackMessage = `<!here|here> There is an issue ${issueTitle} (${issueId}) ` +
    `in ${appName}, version ${latestAppVersion} on ${appPlatform} that is causing ` +
    `${parseFloat(crashPercentage).toFixed(2)}% of all sessions to crash.`;
  return notifySlack(slackMessage).then(() => {
    console.log(`Posted velocity alert ${issueId} successfully to Slack`);
  });
});
When I tried to deploy Crashlytics events, I was greeted with the following error message.
⚠ functions: failed to update function crashlyticsOnRegressed
HTTP Error: 400, The request has errors
⚠ functions: failed to update function crashlyticsOnNew
HTTP Error: 400, The request has errors
⚠ functions: failed to update function crashlyticsOnVelocityAlert
HTTP Error: 400, The request has errors
Sure enough, the Cloud Functions documentation no longer lists Crashlytics, which used to be under the section "Crashlytics triggers". Perhaps Google no longer supports it.
According to this issue, the crashlytics function triggers were deprecated and removed in release 3.13.3 of the firebase-functions SDK.
