Fetching multiple raw md files from GitHub in React JS - javascript

I'm trying to fetch data from multiple raw .md files in a GitHub repo. Currently I'm able to fetch only one, yet I need to get all of them.
I have a GitHub repo and I'm looking to fetch data from a raw .md file, which is not a problem. The problem is that the repo has a bunch of folders and each folder has its own .md file. I need some way to map through all the folders and fetch all of the .md files.
Let's say I have a GitHub repo with the following folders:
folder1 -> text1.md
folder2 -> text2.md
folder3 -> text3.md
I'm currently able to fetch only one raw .md file using the following method:
let fetchData = () => {
  axios.get("https://raw.githubusercontent.com/user-name/repo-name/master/folder1/text1.md").then(response => {
    console.log(response)
  }).catch(error => {
    console.log(error)
  })
}
My goal is to fetch all of text1.md, text2.md, and text3.md so I can map through them and display them in a table.

Based on your comments, I would say that your best bet is to make a Node worker that you can run weekly (or during deployments) that would crawl the information (filename and content) from the folders you point it at, and then save that information in some way you can later consume from Gatsby (I guess the ideal way would be to put it into Gatsby's GraphQL layer).
This is a rough idea of how that worker could look, with the limited information I have:
let repoBaseUrl = 'https://raw.githubusercontent.com/user-name/repo-name/master/';

let folders = [
  'folder1',
  'folder2',
  'folder3'
];

let fetchFileName = async (folder) => {
  // Your function to get the filename (see the sketch below for one possible approach)
  return filename;
}

let fetchFileContent = async (folder, filename) => {
  try {
    const response = await axios.get(`${repoBaseUrl}${folder}/${filename}`);
    return response.data;
  }
  catch(error) {
    // do something with the error
  }
}

let fetchFolderContent = async () => {
  const data = {};
  // use for...of instead of forEach so each await is actually waited for
  for (const folder of folders) {
    const filename = await fetchFileName(folder);
    const content = await fetchFileContent(folder, filename);
    data[folder] = {
      filename,
      content,
    }
  }
  return data;
}

let main = async () => {
  const data = await fetchFolderContent();
  // Process your data
  // E.g.: save it to GraphQL so you can consume it from Gatsby
}

main();
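If the filenames are not known in advance and the repo is public, one possible way to fill in fetchFileName is GitHub's contents API (https://api.github.com/repos/{owner}/{repo}/contents/{path}), which lists a folder as JSON. This is only a sketch; user-name/repo-name are the placeholders from your URL and I'm assuming each folder holds a single .md file:
// Hypothetical fetchFileName using the GitHub contents API (public repo, unauthenticated).
let fetchFileName = async (folder) => {
  const apiUrl = `https://api.github.com/repos/user-name/repo-name/contents/${folder}`;
  const response = await axios.get(apiUrl);
  // The API returns an array of { name, type, ... } entries for the folder.
  const mdFile = response.data.find(entry => entry.name.endsWith('.md'));
  return mdFile ? mdFile.name : null;
};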

How to correctly display images on React frontend that are returned by Laravel as StreamedResponse objects?

The idea is as follows:
Images/documents are stored privately on the server
A logged-in user on the frontend clicks a button which sends an axios request to the backend to get an aggregated result of ModelA from TableA and its associated attachment file list from TableB
For each ModelA, numerous requests are made to the endpoint to fetch images, which are returned as \Symfony\Component\HttpFoundation\StreamedResponse via Storage::download($request->file_name)
This works in the sense that files are returned.
Note - I tried attaching all files to the response in step 2 but this didn't work, so I added the extra step to get the file list and then fetch individual files based on that list. This might kill the webserver if the number of requests becomes too high, so I would appreciate any advice on a different approach.
The problem
How do I display the files in React, and is this the right approach at all, considering the potential performance issues noted above?
I've tried the following:
Create an octet-stream URL with FileReader, but these wouldn't display and all had the same URL despite await being used on the reader.readAsDataURL(blob) call:
const { email, name, message, files } = props
const [previews, setPreviews] = useState<string[]>([])
const { attachments } = useAttachment(files)

useEffect(() => {
  const p = previews
  files && attachments?.forEach(async filename => {
    const reader = new FileReader()
    reader.onloadend = () => {
      p.push(reader.result as string)
      setPreviews(p)
    }
    const blob = new Blob([filename])
    await reader.readAsDataURL(blob)
  })
}, [files, attachments, previews])
Create src attributes with URL.createObjectURL(), but these, although generated and unique, wouldn't display when used in an <img /> tag:
useEffect(() => {
  const p = previews
  files && attachments?.forEach(filename => {
    const blob = new Blob([filename])
    const src = URL.createObjectURL(blob)
    p.push(src)
    setPreviews(p)
  })
}, [files, attachments, previews])
Results example:
<img src="blob:http://127.0.0.1:8000/791f5efb-1b4e-4474-a4b6-d7b14b881c28" class="chakra-image css-0">
<img src="blob:http://127.0.0.1:8000/3d93449e-175d-49af-9a7e-61de3669817c" class="chakra-image css-0">
Here's the useAttachment hook:
import { useEffect, useState } from 'react'
import { api } from '#utils/useAxios'

const useAttachment = (files: any[] | undefined) => {
  const [attachments, setAttachments] = useState<any[]>([])

  const handleRequest = async (data: FormData) => {
    await api().post('api/attachment', data).then(resp => {
      const attach = attachments
      attach.push(resp)
      setAttachments(attach)
    })
  }

  useEffect(() => {
    if (files) {
      files.forEach(async att => {
        const formData = new FormData()
        formData.append('file_name', att.file_name)
        await handleRequest(formData)
      })
    }
  }, [files, attachments])

  return { attachments }
}

export default useAttachment
Try Storage::response(). This is the same as Storage::download(), except that it sets the Content-Disposition header to inline instead of attachment.
This tells the browser to display it instead of downloading it. See MDN Docs Here
Then you can use it as the src for an <img/>.
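For example, a minimal sketch on the React side (reusing the api() helper, the 'api/attachment' route, and the file_name field from the question, which are assumptions about your setup) could request the file as a blob and turn it into an object URL:
// Sketch only: fetch one inline-streamed file as a blob and build an <img> src from it.
const fetchPreview = async (file_name: string) => {
  const formData = new FormData()
  formData.append('file_name', file_name)
  // responseType 'blob' keeps the binary data intact instead of decoding it as text
  const resp = await api().post('api/attachment', formData, { responseType: 'blob' })
  return URL.createObjectURL(resp.data) // usable directly as an <img src>
}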
Solved it by sending the files in a single response but encoded with base64encode(Storage::get('filename')). Then, on the frontend, it was as simple as:
const base64string = 'stringReturned'
<img src={`data:image/png;base64,${base64string}`} />
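If several files come back in one response, a small sketch of rendering them (assuming the backend returns an array of base64 strings and the images are PNGs) could be:
// Sketch: render a list of base64-encoded images (assumes PNG content).
const Previews = ({ images }: { images: string[] }) => (
  <>
    {images.map((b64, i) => (
      <img key={i} src={`data:image/png;base64,${b64}`} alt="" />
    ))}
  </>
)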

amazon s3.upload is taking time

I am trying to upload files to S3, and before that I am altering the name of each file. I accept 2 files from the request form-data object, rename them, and upload them to S3. At the end of the task I need to return the list of renamed files that were uploaded successfully.
I am using the s3.upload() function. The problem is that the variable initially assigned an empty array, which should end up containing the renamed file list, is still empty in the response. s3.upload() is taking too much time. Is there any solution where I can store the file name if the upload is successful and return those names in the response?
Please help me fix this. The code looks like this:
if (formObject.files.document && formObject.files.document.length > 0) {
  const circleCode = formObject.fields.circleCode[0];
  let collectedKeysFromAwsResponse = [];
  formObject.files.document.forEach(e => {
    const extractFileExtension = ".pdf";
    if (_.has(FILE_EXTENSIONS_INCLUDED, _.lowerCase(extractFileExtension))) {
      console.log(e);
      // change the filename
      const originalFileNameCleaned = "cleaning name logic";
      const _id = mongoose.Types.ObjectId();
      const s3FileName = "s3-filename-convention";
      console.log(e.path, "", s3FileName);
      const awsResponse = new File().uploadFileOnS3(e.path, s3FileName);
      if (e.hasOwnProperty('ETag')) {
        collectedKeysFromAwsResponse.push(awsResponse.key.split("/")[1])
      }
    }
  });
}
Use await s3.upload(params).promise(); that is the solution.
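Applied to the code in the question, a rough sketch (assuming uploadFileOnS3 is changed to return s3.upload(params).promise(), and reusing the question's variable names) could wait for every upload before reading the collected keys:
// Sketch only: must run inside an async function.
// Assumes new File().uploadFileOnS3() now returns s3.upload(params).promise().
const uploads = formObject.files.document.map(async (e) => {
  const s3FileName = "s3-filename-convention"; // your renaming logic here
  const awsResponse = await new File().uploadFileOnS3(e.path, s3FileName);
  return awsResponse.key.split("/")[1];
});
const collectedKeysFromAwsResponse = await Promise.all(uploads);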
Use the latest code, which is the AWS SDK for JavaScript v3. Here is the code you should be using:
// Import required AWS SDK clients and commands for Node.js.
import { PutObjectCommand } from "@aws-sdk/client-s3";
import { s3Client } from "./libs/s3Client.js"; // Helper function that creates the Amazon S3 service client module.
import path from "path";
import fs from "fs";

const file = "OBJECT_PATH_AND_NAME"; // Path to and name of object. For example '../myFiles/index.js'.
const fileStream = fs.createReadStream(file);

// Set the parameters
export const uploadParams = {
  Bucket: "BUCKET_NAME",
  // Add the required 'Key' parameter using the 'path' module.
  Key: path.basename(file),
  // Add the required 'Body' parameter
  Body: fileStream,
};

// Upload file to specified bucket.
export const run = async () => {
  try {
    const data = await s3Client.send(new PutObjectCommand(uploadParams));
    console.log("Success", data);
    return data; // For unit tests.
  } catch (err) {
    console.log("Error", err);
  }
};

run();
More details can be found in the AWS JavaScript V3 DEV Guide.

how to copy an image and save it in a new folder in electron

I am trying to make an image organizer app which searches images using tags.
I want the user to select the image they want; so far I have done this with the following code:
// renderer process
$("#uploadImage").on("click", (e) => {
  ipcRenderer.send('dialoguploadImage')
});
This is the main process:
ipcMain.on('dialoguploadImage', (e) => {
  dialog.showOpenDialog({
    properties: ['openFile']
  }).then(result => {
    sendBackimagePathFromMain(result.filePaths[0])
  }).catch(err => {
    console.log(err)
  })
});

function sendBackimagePathFromMain(result) {
  mainWindow.webContents.send('imagePathFromMain', result)
}
So I have the image path, and the only thing I want to know is:
how can I duplicate this image, rename it, create a new folder, and save the image in that folder?
For example, to this path:
('./currentDirectory/imageBackup/dognothapppy.jpg')
You can use fs.mkdirSync() to make the folder and fs.copyFileSync() to 'duplicate and rename' the file (in a file system, you don't need to duplicate and rename a file in two separate steps; you do both at once by copying the file), or their async counterparts (see the sketch after the example below).
const { mkdirSync, copyFileSync } = require('fs')
const { join } = require('path')
const folderToCreate = 'folder'
const fileToCopy = 'selectedFile.txt'
const newFileName = 'newFile.txt'
const dest = join(folderToCreate, newFileName)
mkdirSync(folderToCreate)
copyFileSync(fileToCopy, dest)
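If you prefer the async variants mentioned above, a roughly equivalent sketch with fs/promises would be:
// Same idea with the promise-based API instead of the sync calls.
const { mkdir, copyFile } = require('fs/promises')
const { join } = require('path')

const copyImage = async (fileToCopy, folderToCreate, newFileName) => {
  await mkdir(folderToCreate, { recursive: true }) // no error if the folder already exists
  await copyFile(fileToCopy, join(folderToCreate, newFileName))
}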

How to upload a file into Firebase Storage from a callable https cloud function

I have been trying to upload a file to Firebase storage using a callable firebase cloud function.
All I am doing is fetching an image from a URL using axios and trying to upload it to storage.
The problem I am facing is that I don't know how to save the response from axios and upload it to storage.
First, how do I save the received file in the temp directory that os.tmpdir() creates?
Then, how do I upload it into storage?
Here I am receiving the data as an arraybuffer, converting it to a Blob, and trying to upload it.
Here is my code. I think I am missing a major part.
If there is a better way, please recommend it. I've been looking through a lot of documentation and ended up with no clear solution. Please guide me. Thanks in advance.
const bucket = admin.storage().bucket();
const path = require('path');
const os = require('os');
const fs = require('fs');

module.exports = functions.https.onCall((data, context) => {
  try {
    return new Promise((resolve, reject) => {
      const {
        imageFiles,
        companyPIN,
        projectId
      } = data;
      const filename = imageFiles[0].replace(/^.*[\\\/]/, '');
      const filePath = `ProjectPlans/${companyPIN}/${projectId}/images/${filename}`; // Path I am trying to upload to in Firebase storage
      const tempFilePath = path.join(os.tmpdir(), filename);
      const metadata = {
        contentType: 'application/image'
      };
      axios
        .get(imageFiles[0], { // URL for the image
          responseType: 'arraybuffer',
          headers: {
            accept: 'application/image'
          }
        })
        .then(response => {
          console.log(response);
          const blobObj = new Blob([response.data], {
            type: 'application/image'
          });
          return blobObj;
        })
        .then(async blobObj => {
          return bucket.upload(blobObj, {
            destination: tempFilePath // Here I am wrong.. How to set the path of the downloaded blob file?
          });
        }).then(buffer => {
          resolve({ result: 'success' });
        })
        .catch(ex => {
          console.error(ex);
        });
    });
  } catch (error) {
    // unknown: 500 Internal Server Error
    throw new functions.https.HttpsError('unknown', 'Unknown error occurred. Contact the administrator.');
  }
});
I'd take a slightly different approach and avoid using the local filesystem at all. Since it's just tmpfs, it will cost you memory that your function is already using to hold the buffer/blob, so it's simpler to avoid it and write directly from that buffer to GCS using the save method on the GCS File object.
Here's an example. I've simplified out a lot of your setup, and I am using an HTTP function instead of a callable. Likewise, I'm using a public Stack Overflow image and not your original URLs. In any case, you should be able to adapt this template back to what you need (e.g. change the prototype, remove the HTTP response, and replace it with the return value you need):
const functions = require('firebase-functions');
const axios = require('axios');
const admin = require('firebase-admin');
admin.initializeApp();

exports.doIt = functions.https.onRequest((request, response) => {
  const bucket = admin.storage().bucket();
  const IMAGE_URL = 'https://cdn.sstatic.net/Sites/stackoverflow/company/img/logos/so/so-logo.svg';
  const MIME_TYPE = 'image/svg+xml';
  return axios.get(IMAGE_URL, { // URL for the image
    responseType: 'arraybuffer',
    headers: {
      accept: MIME_TYPE
    }
  }).then(response => {
    console.log(response); // only to show we got the data, for debugging
    const destinationFile = bucket.file('my-stackoverflow-logo.svg');
    return destinationFile.save(response.data).then(() => { // note: defaults to resumable upload
      return destinationFile.setMetadata({ contentType: MIME_TYPE });
    });
  }).then(() => { response.send('ok'); })
    .catch((err) => { console.log(err); })
});
As a commenter noted, in the above example the axios request itself makes an external network access, and you will need to be on the Blaze or Flame plan for that. However, that alone doesn't appear to be your current problem.
Likewise, this also defaults to using a resumable upload, which the documentation does not recommend when you are uploading large numbers of small (<10MB) files, as there is some overhead.
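If that overhead matters for your case, a possible tweak (sketch only, replacing the save/setMetadata pair above) is to disable the resumable behaviour and set the content type in the same call:
// Sketch: skip the resumable-upload handshake for small files.
return destinationFile.save(response.data, {
  resumable: false,
  contentType: MIME_TYPE,
});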
You asked how this might be used to download multiple files. Here is one approach. First, let's assume you have a function that returns a promise that downloads a single file given its filename (I've abridged this from the above but it's basically identical except for the change of IMAGE_URL to filename -- note that it does not return a final result such as response.send(), and there's sort of an implicit assumption that all the files are the same MIME_TYPE):
function downloadOneFile(filename) {
  const bucket = admin.storage().bucket();
  const MIME_TYPE = 'image/svg+xml';
  return axios.get(filename, ...)
    .then(response => {
      const destinationFile = ...
    });
}
Then, you just need to iteratively build a promise chain from the list of files. Let's say they are in imageUrls. Once built, return the entire chain:
let finalPromise = Promise.resolve();
imageUrls.forEach((item) => { finalPromise = finalPromise.then(() => downloadOneFile(item)); });
// if needed, add a final .then() section for the actual function result
return finalPromise.catch((err) => { console.log(err) });
Note that you could also build an array of the promises and pass them to Promise.all() -- that would likely be faster as you would get some parallelism, but I wouldn't recommend that unless you are very sure all of the data will fit inside the memory of your function at once. Even with this approach, you need to make sure the downloads can all complete within your function's timeout.
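For completeness, a parallel variant with Promise.all() (same memory and timeout caveats as above) might look like:
// Sketch: download all files in parallel; only do this if memory and the timeout allow it.
return Promise.all(imageUrls.map((item) => downloadOneFile(item)))
  .catch((err) => { console.log(err); });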

Why does 'fs' only persist file changes after the end of the program?

I have an application that persists its state on disk. When any state change occurs, it reads the old state from the file, changes the state in memory, and persists it on disk again. But the problem is that the store function only writes to disk after the program closes, and I don't know why.
const load = (filePath) => {
  const fileBuffer = fs.readFileSync(
    filePath, "utf8"
  );
  return JSON.parse(fileBuffer);
}

const store = (filePath, data) => {
  const contentString = JSON.stringify(data);
  fs.writeFileSync(filePath, contentString);
}
To create a complete example, let's use the load-dataset command, in the file "src/interpreter/index.js".
while(this.isRunning) {
  readLineSync.promptCL({
    "load-dataset": async (type, name, from) => {
      await loadDataset({type, name, from});
    },
    ...
  }, {
    limit: null,
  });
}
In general, this calls loadDataset, which reads JSON or CSV files.
export const loadDataset = async (options) => {
  switch(options.type) {
    case "csv":
      await readCSVFile(options.from)
        .then(data => {
          app.createDataset(options.name, data);
        });
      break;
    case "json":
      const data = readJSONFile(options.from);
      app.createDataset(options.name, data);
      break;
  }
}
The method createDataset() reads the file on disk, updates it, and writes it again.
createDataset(name, data) {
  const state = loadState();
  state.datasets = [
    ...state.datasets,
    {name, size: data.length}
  ];
  storeState(state);

  const file = loadDataset();
  file.datasets = [
    ...file.datasets,
    {name, data}
  ];
  storeDataset(file);
}
Where the methods loadState(), storeState(), loadDataset(), and storeDataset() use the initial methods shown above.
const loadState = () =>
  load(stateFilePath);

const storeState = state =>
  store(stateFilePath, state);

...

const loadDataset = () =>
  load(datasetFilePath);

const storeDataset = dataset =>
  store(datasetFilePath, dataset);
I'm using a package from npm called readline-sync to create a simple "terminal"; I don't know if it causes any conflicts.
The source code is on GitHub: Git repo. In the file "index.js", the method createDataset() calls loadState() and storeState(), which both use the methods shown above.
The package readline-sync is used in the interpreter, here Interpreter file, which basically loops until the exit command.
Just as a note, I'm using Ubuntu 18.04.2 and Node.js 10.15.0. To make this code I followed an example in the YouTube video. That guy is using macOS; I really hope the operating system won't be a problem.
