I have a blog-type site that I've been working on, which is almost complete. I've used Eleventy, and now I've hooked up Netlify CMS, which required me to restructure some data.
Netlify CMS requires me to have separate files for the relation widget data, in this case authors. So I have an authors directory, which presently has three JSON files (jbloggs.json, etc.). Each object is flat, an example of one being:
./src/_data/authors/jbloggs.json
{
"key": "jbloggs",
"name": "Joe Bloggs",
// ... Removed for brevity
}
I initially created an array of objects and everything was working great:
./src/authors.json
[
{
"key": "jbloggs",
"name": "Joe Bloggs",
// ... Removed for brevity
},
{
"key": "user2",
"name": "User Two",
// ... Removed for brevity
}
]
What I need to do is grab however many files are in my authors directory and add the objects from within each file to the array in my authors.json file. I've tried using json-concat, to no avail. I have fs and json-concat required in my Eleventy config file:
const fs = require('fs');
const jsonConcat = require('json-concat');
const { resolve } = require('path');

const files = [];
const source = resolve('./src/_data/authors');
const target = resolve('./src/authors.json');

// collect the filenames from the authors directory
fs.readdirSync(source).forEach((file) => {
  files.push(file);
});

jsonConcat({ src: files, dest: target }, function (json) {
  console.log(files);  // returns an array of the correct filenames
  console.log(target); // returns the path to the target file /Users/darrenlee/Desktop/WebApp/src/authors.json
  console.log(json);   // returns null
});
In my terminal I also get [11ty] File changed: src/authors.json, but the file hasn't changed: I still only have two authors in there, and the aim is to have all of the authors (currently three) from the files in the authors directory.
Any help would be greatly appreciated.
Thank you
It turns out I was over-complicating things and I didn't need to add to the authors.json file at all.
In fact, I have now deleted that file, as I just created a new collection:
eleventyConfig.addCollection("contributors", author => Object.values(author.items[0].data.contributors));
And now, instead of calling my authors global data, I simply call collections.contributors in my Nunjucks templates. As always, there's probably a much cleaner way, but I have existing filters to show all guides by a chosen author, their bio on their posts, etc.
It's working well, so for now it ships; maybe I'll try to replicate this without creating the extra collection at a later date, if I ever figure it out.
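For reference, one way to avoid the extra collection is a JavaScript global data file that merges the per-author files at build time. This is only a sketch under my own assumptions (the authorList.js file name is hypothetical), relying on Eleventy's documented behaviour of exposing a data file's exported function's return value as global data:

// ./src/_data/authorList.js (hypothetical name; exposed as the `authorList` global)
const fs = require('fs');
const path = require('path');

module.exports = () => {
  const dir = path.resolve(__dirname, 'authors');
  return fs
    .readdirSync(dir)
    .filter((file) => file.endsWith('.json'))
    .map((file) => JSON.parse(fs.readFileSync(path.join(dir, file), 'utf8')));
};

Templates could then loop over authorList the same way they previously looped over the authors global data.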
I am working on a React project in an attempt to create a small Todo List app.
I have my data in a JSON file, currently hosted on jsonbin.io, in a format that looks like this...
{
"moduleAccess": {
"tasks": [
{
"email": "campbell#yahoo.com",
"id": 0,
"task_name": "Call mom",
"due_date": 44875,
"completed": true
},
{
"email": "palsner593#gmail.com",
"id": 1,
"task_name": "Buy eggs",
"due_date": 44880,
"completed": false
},
{
"email": "rob#gmail.com",
"id": 2,
"task_name": "Go to dog park",
"due_date": 44879,
"completed": false
}
]
}
}
Currently, I fetch the data using jsonbin.io's API. The data is brought into a variable called Tasks. If a user updates a specific to-do item, deletes a to-do item, or creates a new one, all those changes are put back into the Tasks variable. I can then push those tasks back to the server.
What I explained above works fine; however, the caveat is that I would like to allow multiple users to log in and then pull only the Todo items that pertain to their respective email.
Say campbell@yahoo.com is logged in to my app. In this case, in my fetch request, I can specify that I would only like records with campbell@yahoo.com:
async function loadData() {
  const email = 'campbell@yahoo.com';
  const newPath = `$..tasks[?(@.email=='${email}')]`;
  console.log(newPath);
  const url = 'https://api.jsonbin.io/v3/b/*binid*?meta=false';
  const response = await fetch(url, {
    method: "GET",
    headers: {
      "X-Master-Key": key,
      "X-JSON-Path": newPath
    }
  });
  const data = await response.json();
  setTasks([...data]); // or whatever
  console.log(tasks);  // note: setTasks is asynchronous, so this may log the previous state
}
This concept works as well. However, when pushing my task data back to the server after a user has made changes, I encounter an issue: the API I am using does not seem to allow parameters specifying the JSON path on an update, as X-JSON-Path is only allowed on read requests. So when I push data to the server, it seems as if all of the JSON data will be overwritten, regardless of the user.
Does anybody have an alternative way to push/pull user-specific data? I am sorry if the detail I have provided is unnecessary; I'm not sure what the easiest way to approach this problem is for a React app.
Any help is appreciated. Thanks!
I did a little research into the jsonbin.io API and came up with a solution that might work. I'm not completely sure it will, but still:
When creating a new bin, you can add it to a collection using the X-Collection-Id header. So you might be able to set up the following flow:
When a user registers, create a separate bin of tasks for that user.
Add the user, with their bin ID, to a users collection where you keep all of your users.
When a user authenticates, get their bin ID using the filters you already use in your code, and store it somewhere in your app for future use.
After this you will be able to fetch a user's tasks by that bin ID and modify them, because there is now a separate bin for each user and you can simply overwrite all of its content.
Hope this works.
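A minimal sketch of that first step, assuming jsonbin.io's v3 bin-creation endpoint and the same header style used in the question (the collectionId value and the createUserBin helper name are placeholders of mine):

// Hypothetical helper: create a separate task bin for a newly registered user,
// grouped into a collection via X-Collection-Id.
async function createUserBin(key, collectionId, email) {
  const response = await fetch('https://api.jsonbin.io/v3/b', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Master-Key': key,
      'X-Collection-Id': collectionId, // groups all user bins together
      'X-Bin-Name': `tasks-${email}`   // placeholder naming scheme
    },
    body: JSON.stringify({ tasks: [] }) // each user starts with an empty list
  });
  const result = await response.json();
  return result.metadata.id; // store this bin id alongside the user record
}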
My app can successfully list and download (export text/plain) a file from Google Drive to my local hard drive, using separate functions listFiles() and downloadFile() (extensively using code from the Node.js Quickstart).
I am trying to combine this code into a function that will download (export text/plain) all of the files listed in the folder.
At this point my file and path references are hard-coded in for testing (one file/one path).
So I am trying to understand how a modified listFiles() could loop through the available list of files and provide each fileId in turn as a reference for the downloadFile() code. I also want to provide the matching fileName for path building.
In my listFiles() I cannot seem to find much information on parsing the returned promise data stream, so I've just got a naive version going, which can only download one file (this is my understanding of the code):
/**
 * Lists names and IDs of pageSize number of files (using a query to define the folder of files).
 * @param {google.auth.OAuth2} auth An authorized OAuth2 client.
 */
function listFiles(auth) {
const drive = google.drive({version: 'v3', auth});
drive.files.list({
corpora: 'user',
pageSize: 1,
// files in a parent folder (drive>ocrTarget ID) that have not been trashed
q: `'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' in parents and trashed=false`,
fields: 'nextPageToken, files(id, name)',
}, (err, res) => {
if (err) return console.log('The API returned an error: ' + err);
const files = res.data.files;
if (files.length) {
files.map((file) => {
console.log(file);
});
} else {
console.log('No files found.');
}
});
}
Output
PS C:\Users\blah\blah\gDriveDev> node . myNodejsScript
{
id: 'xxxx file id of only file that could be listed with PageSize: 1 xxxxx',
name: 'Copy of 31832_226140__0001-00007'
}
I have gone through the Google Drive for Developers Drive API (V3) docs (Guides/Reference), which cover request parameters. However, I want to work with the data type/structure of the output, e.g. walk the file list and parse out each fileId.
(i)
Before I got my download code working properly I was creating metadata files with a JSON layout. Now I have no idea how I did this. They were saved according to my file/path settings like this:
{
"kind": "drive#file",
"id": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"name": "atest.txt",
"mimeType": "text/plain"
}
(ii)
During experiments running working code, I found that files.length has a value equal to the number of files listed per the pageSize setting.
(iii)
For files.map((file) => ...) it looked like I was dealing with a Map object (MDN Map Reference), but error messages in my test code showed that it was not; it is the Array map() method.
(iv)
I have seen the following type of code used for accessing parameters:
let data = 'Name,URL\n';
res.data.files.map(entry => {
  // destructure the name and webViewLink properties from each file object
  const { name, webViewLink } = entry;
  data += `${name},${webViewLink}\n`; // append one CSV row per file
});
But I don't have the knowledge to interpret this in order to evaluate it for my situation.
If anyone can make a suggestion for my situation, it would be appreciated.
-------------------- [Added to question] --------------------
To summarize, from the following:
const files = res.data.files;
if (files.length) {
files.map((file) => {
I understand that the length property is the number of files, that files is the array of file objects, and that each file exposes its details (id, name).
How do I index these results? Do I have to read each of the file details from the list into an array?
Some examples of output here:
console.log(Object.keys(file)); // a list of file key types
output
[ 'id', 'name' ]
and
console.log(file);
eg
{
id: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
name: 'Copy of 31832_226140__0001-00007'
}
But the keys are not numbered. So there is no numerical reference (index) available.
My aim: read the full list of file names available and supply these for each file to be downloaded.
You can use a forEach() loop with an index
Sample:
const files = res.data.files;
const ids = [];
if (files.length) {
  files.forEach(function (file, i) {
    ids[i] = file.id; // collect each file's id by position
  });
}
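Building on that, here is a sketch of the combined flow the question asks for, assuming the googleapis Node.js client from the quickstart; the downloadFile helper and its signature are my assumption, standing in for the question's existing download code:

const { google } = require('googleapis');

// List every file in the target folder, then download each one,
// using file.id for the export call and file.name for path building.
async function downloadAllFiles(auth) {
  const drive = google.drive({ version: 'v3', auth });
  const res = await drive.files.list({
    corpora: 'user',
    q: `'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' in parents and trashed=false`,
    fields: 'nextPageToken, files(id, name)',
  });
  for (const file of res.data.files) {
    await downloadFile(drive, file.id, `${file.name}.txt`); // hypothetical helper
  }
}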
I'm making a note-taking app and I've decided to store all the notes and their structure in a JSON file. In JavaScript, I get the JSON with AJAX, parse it and output it on the website.
My note structure is an array of objects that can be nested, like this (if an entry is a note, it has a "content" attribute; if it is a folder, it has a "children" array of objects, which can be empty if the folder should be empty):
{
  "entries": [
    {
      "name": "Some note",
      "content": "This is a test note"
    },
    {
      "name": "folder",
      "children": [
        {
          "name": "Bread recipe",
          "content": "Mix flour with water..."
        },
        {
          "name": "Soups",
          "children": [
            {
              "name": "Pork soup",
              "content": "Add meat, onion..."
            },
            {
              "name": "Chicken soup",
              "content": "....."
            }
          ]
        }
      ]
    }
  ]
}
To list the root directory, it's simple: I just loop through the array, as it only outputs the top-level records:
for (const entry of data.entries) {
  const li = document.createElement("li");
  li.textContent = entry.name;
  if (entry.children) {
    li.className = "folder";
  } else {
    li.className = "file";
  }
  loop.appendChild(li);
}
But what about the folders? How should I proceed in listing the folders if the depth of nesting is unknown? And how do I target a specific folder? Should I add unique IDs to every object so I can filter the array with them? Or should I store some kind of depth information in a variable all the time?
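(For reference, the usual pattern for walking a structure of unknown depth is recursion. A minimal sketch extending the loop above; the nested <ul> handling is an assumption of mine, not from the original post:)

// Render entries recursively: folders get a nested <ul>, notes stay plain <li>s.
function renderEntries(entries, parentElement) {
  for (const entry of entries) {
    const li = document.createElement("li");
    li.textContent = entry.name;
    if (entry.children) {
      li.className = "folder";
      const ul = document.createElement("ul");
      renderEntries(entry.children, ul); // recurse to any depth
      li.appendChild(ul);
    } else {
      li.className = "file";
    }
    parentElement.appendChild(li);
  }
}

// usage, with the same container as above: renderEntries(data.entries, loop);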
You're making this more difficult for yourself by saving data to a JSON file. That is not a good approach. What you need to do is design a database schema appropriate for your data and create an API that outputs a predictable pattern of data that your client can work with.
I would suggest having a Folder resource and a Note resource linked through a one-to-many relationship. Each Folder resource can have many associated Note entries, but each Note has only one Folder that it is linked to. I suggest using an ORM, because most make it easy to eager load related data. For instance, if you choose Laravel you can use Eloquent, and then getting all notes for a folder is as easy as:
$folderWithNotes = Folder::with('notes')->where('name', 'school-notes')->get();
Knowing PHP is beside the point. You should still be able to see the logic of that.
If you create a database and build a server-side API to handle your data, you will end up with JSON on your client side that has a predictable format and is easy to work with.
I am making a Discord bot in Node.js, mostly for fun and to get better at coding, and I want the bot to push a string into an array and update the array file permanently.
I have been using separate .js files for my arrays such as this;
module.exports = [
"Map: Battlefield",
"Map: Final Destination",
"Map: Pokemon Stadium II",
];
and then calling them in my main file. Now I tried using .push(), and it adds the desired string, but only for that run; the change is never saved to the file.
What is the best solution for an array I can update and save the inputs to? Apparently JSON files are good for this.
Thanks, Carl
Congratulations on the idea of writing a bot in order to get some coding practice. I bet you will succeed with it!
I suggest you try to split your problem into small chunks, so it is going to be easier to reason about it.
Step 1 - storing
I agree with you in using JSON files as data storage. For an app that is intended to be a "training gym" is more than enough and you have all the time in the world to start looking into databases like Postgres, MySQL or Mongo later on.
A JSON file to store a list of values may look like that:
{
"values": [
"Map: Battlefield",
"Map: Final Destination",
"Map: Pokemon Stadium II"
]
}
When you save this piece of JSON into list1.json, you have your first data file.
Step 2 - reading
Reading a JSON file in Node.js is easy:
const list1 = require('./path-to/list1.json');
console.log(list1.values);
This will load the entire content of the file into memory when your app starts. You can also look into more sophisticated ways to read files using the file system API.
Step 3 - writing
Looks like you know your way around in-memory array modifications using APIs like push() or maybe splice().
Once you have fixed the in-memory representation, you need to persist the change into your file; you basically have to write it down in JSON format.
Option n.1: you can use the Node's file system API:
// https://stackoverflow.com/questions/2496710/writing-files-in-node-js
const fs = require('fs');
const filePath = './path-to/list1.json';
const fileContent = JSON.stringify(list1);
fs.writeFile(filePath, fileContent, function(err) {
if(err) {
return console.log(err);
}
console.log("The file was saved!");
});
Option n.2: you can use fs-extra which is an extension over the basic API:
const fs = require('fs-extra');
const filePath = './path-to/list1.json';
fs.writeJson(filePath, list1, function(err) {
if(err) {
return console.log(err);
}
console.log("The file was saved!");
});
In both cases list1 comes from the previous steps, and it is where you did modify the array in memory.
Be careful with asynchronous code:
Both of the writing examples above use non-blocking asynchronous API calls.
For simplicity's sake, you can start by using the synchronous APIs instead, which are basically:
fs.writeFileSync
fs.writeJsonSync
You can find all the details in the links above.
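For example, the synchronous write collapses to a single blocking call; a sketch using the same list1 and filePath as above:

const fs = require('fs');

// Blocks until the file is on disk; fine for a small bot's data file.
// The `null, 2` arguments pretty-print the JSON with 2-space indentation.
fs.writeFileSync(filePath, JSON.stringify(list1, null, 2));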
Have fun with bot coding!
I have a JSON API served by a Ruby on Rails backend. One of the endpoints returns an array of objects structured like this:
{
"title_slug": "16-gaijin-games-bittrip-beat-linux-tar-gz",
"platform": "Linux",
"format": ".tar.gz",
"title": "BIT.TRIP BEAT",
"bundle": "Humble Bundle for Android 3",
"unique_games": 9
},
{
"title_slug": "17-gaijin-games-bittrip-beat-linux-deb",
"platform": "Linux",
"format": ".deb",
"title": "BIT.TRIP BEAT",
"bundle": "Humble Bundle for Android 3",
"unique_games": 9
},
Because there are different types of downloads for a single title, the "Title" is not unique across several objects. I would like a count of only the unique titles.
I was thinking of doing it in the model on the Ruby on Rails side and just sending it in the JSON response, but that does not work because counting needs the whole array. I am using Angular on the front end, so I am thinking it needs to be done in the controller. I also filter the response in a table and want the displayed count of unique titles to stay up to date.
Here's a screenshot of the page this is going on to get better perspective. http://i.imgur.com/Iu1Xajf.png
Thank you very much,
Thomas Le
BTW, this is a site I am developing that is not going to be a public website. It is a database site that holds all the data on the bundles I have bought from IndieGala and HumbleBundle. I am not going to make these links available to the public. I am making it more functional than the bare minimum because it is an open source project that I have on GitHub that people can use themselves locally.
Just in case people were wondering why I have Humble Bundle stuff listed on the image.
http://jsfiddle.net/hy7rasp4/
Aggregate your data in an object indexed by the unique key; then you get access to information on duplicates and the count.
var i,
    title,
    uniqueResults = {};

for (i in results) {
  title = results[i].title;
  if (!uniqueResults[title]) {
    uniqueResults[title] = [];
  }
  uniqueResults[title].push(results[i]);
}
Maybe it would be better to restructure your data at the same time, so you can also get those items easily later as well as a quick lookup for the number of titles, e.g. in JavaScript
// assuming arrayOfObjects
var objectOfTitles = {},
i;
for (i = 0; i < arrayOfObjects.length; ++i) {
if (!objectOfTitles.hasOwnProperty(arrayOfObjects[i].title)) {
objectOfTitles[arrayOfObjects[i].title] = [];
}
objectOfTitles[arrayOfObjects[i].title].push(arrayOfObjects[i]);
}
var numberOfTitles = Object.keys(objectOfTitles).length;
// then say you choose a title you want, and you can do
// objectOfTitles[chosenTitle] to get entries with just that title
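As a usage note, if you only need the count of distinct titles rather than the grouped entries, a Set gives it in one line; a small sketch assuming the same arrayOfObjects:

// Set de-duplicates the mapped title strings, so its size is the unique count.
var numberOfTitles = new Set(arrayOfObjects.map(function (o) { return o.title; })).size;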