I'm fetching data from a PS4 Games API, but it's split into 400+ pages. I tried to get the data from all pages, but the solution I came up with doesn't work well. It gives me the error 'JSON Value of type NSNull cannot be converted to a valid URL'. The for loop doesn't seem right either: as the responses come in, my list keeps getting replaced by a different page of results instead of showing all of them.
Additionally, this API is dynamic because new games keep getting released, so how can I fetch data up to the latest page without manually changing the last page number every time? I looked at some related questions here, but I couldn't fit their solutions into my code.
My code is rather long, so I'm only posting the part that matters:
componentDidMount() {
  for (let i = 0; i < 400; i++) {
    fetch(`https://api.rawg.io/api/games?page=${i + 1}&platforms=18`, {
      "method": "GET",
      "headers": {
        "x-rapidapi-host": "rawg-video-games-database.p.rapidapi.com",
        "x-rapidapi-key": "495a18eab9msh50938d62f12fc40p1a3b83jsnac8ffeb4469f"
      }
    })
      .then(res => res.json())
      .then(json => {
        const { results: games } = json;
        this.setState({ games });
        // setting the data in the games state
      });
  }
}
The API also returns a field that gives the link to the next page, so I think there should be a way to use 'next' and fetch data from that URL.
If anyone could help, that would be AWESOME. Thank you in advance
This is my first answer on this forum, so... I hope it's helpful.
As I see it, you have two options:
Each request returns two useful fields in the response, count and next. So you can either make a for loop with count/20 as the limit (20 is the number of items returned per page), or make a while loop that keeps following next until it comes back as null (currently that happens at page 249).
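A rough sketch of the while-loop idea, adapted to the component from the question (the header values mirror the original snippet and the API key is left as a placeholder):

async componentDidMount() {
  let games = [];
  let url = "https://api.rawg.io/api/games?page=1&platforms=18";

  // Keep following `next` until the API reports there is no next page.
  while (url) {
    const res = await fetch(url, {
      method: "GET",
      headers: {
        "x-rapidapi-host": "rawg-video-games-database.p.rapidapi.com",
        "x-rapidapi-key": "<your key>"
      }
    });
    const json = await res.json();
    games = games.concat(json.results);
    url = json.next; // null on the last page, which ends the loop
  }

  this.setState({ games });
}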
What is currently happening is that you are making 400 requests, and as each response comes in it overwrites the component state with what it received. It does not care about what is already there or about any of the other requests.
An approach you could try instead is to append each response's results to a running list as it comes in and update the state with that list.
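A minimal sketch of that, keeping the shape of the original loop (the API key is again left as a placeholder):

componentDidMount() {
  for (let i = 0; i < 400; i++) {
    fetch(`https://api.rawg.io/api/games?page=${i + 1}&platforms=18`, {
      method: "GET",
      headers: {
        "x-rapidapi-host": "rawg-video-games-database.p.rapidapi.com",
        "x-rapidapi-key": "<your key>"
      }
    })
      .then(res => res.json())
      .then(json => {
        // Append this page's results to whatever is already in state instead
        // of replacing it; the functional form avoids clobbering between the
        // 400 in-flight updates.
        this.setState(prev => ({
          games: [...(prev.games || []), ...json.results]
        }));
      });
  }
}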
Going forward, and for your other question about handling new releases: instead of running 400 queries every time the application is used, look into caching the results. When the app loads, check whether a cache exists; load from it if it does, and query the API if it does not. The rawg.io /games endpoint has a parameter for ordering by release date, so on future loads you can loop through pages (newest first) until you reach a game that is already in the cache, and stop there.
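A rough sketch of the caching idea using localStorage; the ordering parameter name (ordering=-released) is an assumption to check against the API docs, and the auth headers are omitted for brevity:

async function loadGames() {
  const cached = JSON.parse(localStorage.getItem("games") || "[]");
  const knownIds = new Set(cached.map(g => g.id));

  let newGames = [];
  let url = "https://api.rawg.io/api/games?platforms=18&ordering=-released";
  let done = false;

  // Walk the newest-first pages until we hit a game we have already cached.
  while (url && !done) {
    const res = await fetch(url);
    const json = await res.json();
    for (const game of json.results) {
      if (knownIds.has(game.id)) { done = true; break; } // reached cached data
      newGames.push(game);
    }
    url = json.next;
  }

  const all = [...newGames, ...cached];
  localStorage.setItem("games", JSON.stringify(all));
  return all;
}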
I'm trying to set up a history command for when a player gets warned by a moderation bot. I have it saving warnings to a JSON file, but I don't know how to save multiple cases for one player. Right now it just replaces the last warn.
//--Adds To Logs
let warnhist = JSON.parse(fs.readFileSync("./historywarn.json", "utf8"));
warnhist[wUser.id] = {
  Case: `${wUser} was warned by ${message.author} for ${reason}`
};
fs.writeFile("./historywarn.json", JSON.stringify(warnhist), (err) => {
  if (err) console.log(err);
});
It saves like this without adding onto it every time:
{"407104392647409664":{"Case":"<#407104392647409664> was warned by <#212770377833775104> for 2nd Warn"}}
I need it to save like this:
{
"407104392647409664":{"Case":"<#407104392647409664> was warned by <#212770377833775104> for 1st Warn", "Case 2":"<#407104392647409664> was warned by <#212770377833775104> for 2nd Warn" }
}
You want an array instead of an object. Try structuring your data a bit more too, so your JSON looks like this:
{ "<user id>": { "warnings": [...] }
Then, instead of overriding the history every time with an assignment (warnhist[wUser.id] = ...), you can use Array.prototype.push to add a new entry.
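A minimal sketch reusing the variables from your snippet:

let warnhist = JSON.parse(fs.readFileSync("./historywarn.json", "utf8"));

// Create the user's entry on the first warning, then append to it afterwards.
if (!warnhist[wUser.id]) {
  warnhist[wUser.id] = { warnings: [] };
}
warnhist[wUser.id].warnings.push(`${wUser} was warned by ${message.author} for ${reason}`);

fs.writeFile("./historywarn.json", JSON.stringify(warnhist), (err) => {
  if (err) console.log(err);
});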
Taking a step back, though, I don't think a JSON file is a good way to store this data, especially if it contains all warnings for all users. You have to read and rewrite the entire file every single time you add a warning, and there could be conflicts if multiple instances of your service do the same for a large number of users.
For a very similar API and storage format, but with better performance and consistency, try MongoDB instead of a plain JSON file.
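A rough sketch with the official MongoDB Node driver, assuming a warnings collection keyed by user id (the database, collection and field names here are illustrative, not required by the driver):

const { MongoClient } = require("mongodb");
const client = new MongoClient("mongodb://localhost:27017");

async function addWarning(userId, moderatorId, reason) {
  await client.connect(); // safe to call repeatedly; it reuses the connection
  const warnings = client.db("modbot").collection("warnings");

  // $push appends to the array; upsert creates the document on the first warning.
  await warnings.updateOne(
    { userId },
    { $push: { warnings: { by: moderatorId, reason, at: new Date() } } },
    { upsert: true }
  );
}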
I'm creating a mock application with JSON Server as the backend, and I'm wondering if it is possible to get the total number of records at an endpoint without loading all the records themselves. Assuming the db.json file looks like the snippet below, how would I find out that the endpoint only has one record without fetching the record itself, provided that's possible?
{
  "books": [{
    "title": "The Da Vinci Code",
    "rating": "0"
  }]
}
You can simply retrieve the X-Total-Count response header.
JSON Server returns this header when pagination is enabled, i.e. when using the _page parameter (e.g. localhost:3000/contacts?_page=1).
Whenever you fetch the data, json-server actually returns the total count by default in an x-total-count response header.
Example:
axios
  .get("http://localhost:3001/users", {
    params: {
      _page: 1,
      _limit: 10
    }
  })
  .then(res => {
    console.log(res.data); // access your data, which is limited to 10 items per page
    console.log(res.headers["x-total-count"]); // length of your data without the page limit
  });
You have three options; I'd recommend the third one:
Return all the records and count them. This could be slow and send a lot of data over the wire but probably is the smallest code change for you. It also opens you up to attacks where people can hammer your server by requesting many records repeatedly.
Add a new endpoint. You could add a new endpoint that simply returns the count. It's simple, but it's slightly annoying to have a second endpoint to document and maintain.
Modify the existing endpoint. Return something like
{
count: 157,
rows: [...data]
}
The benefit of option 3 is that it's all in one endpoint. It also moves you toward being able to add skip and take parameters in the future to paginate the resulting data.
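A hypothetical Express handler illustrating option 3; db.countBooks and db.getBooks are assumed helper functions standing in for whatever data layer you use:

app.get("/books", async (req, res) => {
  const skip = parseInt(req.query.skip, 10) || 0;
  const take = Math.min(parseInt(req.query.take, 10) || 20, 100); // cap the page size

  const count = await db.countBooks();            // total number of records
  const rows = await db.getBooks({ skip, take }); // just this page of data

  res.json({ count, rows });
});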
You could also write another endpoint that returns just the number of records. Usually you will also want the endpoint to accept limit and offset parameters for use with pagination.
let response = await fetch("http://localhost:3001/books?_page=1");
let total = response.headers.get('X-Total-Count'); // total record count, regardless of the page size
I've been searching around the internet for a way to define a query in JavaScript and pass that query to PHP, letting PHP set up a MySQL connection, execute the query, and return the results JSON-encoded.
However, my concern is the security of this method, since users could tamper with the queries to do things you don't want them to do or request data you don't want them to see.
Question
In an application/plugin like this, what kind of security measures would you suggest to prevent users from requesting information I don't want them to?
Edit
The end result of my plugin will be something like
var data = Querier({
  table: "mytable",
  columns: ["column1", "column2", "column3"],
  where: "column2='blablabla'",
  limit: "10"
});
I'm going to let that function make an AJAX request and execute a query in PHP using the above data. I would like to know what security risks this throws up and how to prevent them.
It's unclear from your question whether you're allowing users to type queries that will be run against your database, or if your code running in the browser is doing it (e.g., not the user).
If it's the user: You'd have to really trust them, since they can (and probably will) destroy your database.
If it's your code running in the browser that's creating them: Don't do that. Instead, have client-side code send data to the server, and formulate the queries on the server using full precautions to prevent SQL Injection (parameterized queries, etc.).
Re your update:
I can see at least a couple issues:
Here's a risk right here:
where: "column2='blablabla'"
Now, suppose I decide to get my hands on that before it gets sent to the server and change it to:
where: "column2=');DROP TABLE Stuff; --"
You can't send a complete WHERE clause to the server, because you can't trust it. This is the point of parameterized queries:
Instead, send the pieces of the query separately, specify the columns by name, and on the PHP side be sure you're handling the parameter values correctly.
var data = Querier({
  table: "mytable",
  columns: ["column1", "column2", "column3"],
  where: {
    column2: {
      op: '=',
      value: 'blablabla'
    }
  },
  limit: "10"
});
Now you can build your query without blindly trusting the text from the client; you'll need to do thorough validation of column names, operators, etc.
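The validation idea, sketched in JavaScript since that's the language used elsewhere in this thread; the same whitelist logic belongs on the PHP side, and the allowed names below are purely illustrative:

// Only column names and operators you explicitly allow ever reach the SQL text;
// the values themselves are always bound as parameters.
const ALLOWED_COLUMNS = new Set(["column1", "column2", "column3"]);
const ALLOWED_OPS = new Set(["=", "<", ">", "LIKE"]);

function buildWhere(where) {
  const clauses = [];
  const params = [];
  for (const [column, cond] of Object.entries(where)) {
    if (!ALLOWED_COLUMNS.has(column)) throw new Error("Unknown column");
    if (!ALLOWED_OPS.has(cond.op)) throw new Error("Unknown operator");
    clauses.push("`" + column + "` " + cond.op + " ?"); // the value becomes a placeholder
    params.push(cond.value);
  }
  return { sql: clauses.join(" AND "), params };
}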
Exposing information about your schema to the entire world is giving away information for free. Security is an onion, and one of the outer layers of that onion is obscurity. It's not remotely sufficient on its own, but it's a starting point. So don't let your client code (and therefore anyone reading it) know your table and column names; consider using server-side name mapping, etc.
Depending on how you implement this, you might end up with a gaping security hole, or with no hole at all.
If you are going to build the query on the client side and send it to PHP, I would create a MySQL user with only SELECT, INSERT, UPDATE and DELETE privileges, and with no permission to access any other database.
Ignore this if you use SQLite.
Even so, I advise against this approach!
If you build the query on the server side, just send the server the data you want to query with!
I would change the code into something like this:
var link = QuerierLink('sql.php'); // filename to use for the query
var data = Querier('users', link); // locks access to only this table

data.select({
  columns: ['id', 'name', 'email'],
  where: [
    {id: {'>': 5}},
    {name: {'like': '%david%'}}
  ],
  limit: 10
});
Which, on server-side, would generate the query:
select `id`,`name`,`email` from `db`.`users` where `id`>5 and `name` like '%david%' limit 10
This would be a lot better to use.
With prepared statements, you use:
select `id`,`name`,`email` from `db`.`users` where `id`>:id and `name` like :name limit 10
Passing it to PDO:
$query = 'select `id`,`name`,`email` from `'.$database_name.'`.`users` where `id`>:id and `name` like :name limit 10';
$stmt = $PDO->prepare($query);
$stmt->execute(array(
  'id' => 5,
  'name' => '%david%'
));
$result = $stmt->fetchAll();
This is the preferred way, since you have more control over what is passed.
Also, set the exact database name along with the table name, so you prevent users from accessing data in other tables/databases.
Other databases include information_schema, which has every single piece of information about your entire database, including the user list and restrictions.
Ignore this for SQLite.
If you are going to use MySQL/MariaDB/another server-based database, you should disable all file read/write permissions.
You really don't want anyone writing files onto your server! Especially not to any location they wish.
The risk: it hands attackers free rein to do whatever they wish! This is a massive hole.
Solution: Disable FILE privileges, or limit access to a directory where you block external access using .htaccess, using the --secure_file_priv argument or the @@secure_file_priv system variable.
If you use SQLite, just create a .sqlite(3) file, based on a template file, for each connecting client. Then delete the file when the user closes the connection, or run a sweep every n minutes that removes files older than some age.
The risk: Filling your disk with .sqlite files.
Solution: Clear the files sooner or use a ramdisk with a cron job.
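If you prefer an in-process sweep over a cron job, a rough sketch might look like this (the directory name and timings are made up for illustration):

const fs = require("fs");
const path = require("path");

const DIR = "./client-dbs";        // where the per-client .sqlite files live
const MAX_AGE_MS = 30 * 60 * 1000; // delete files older than 30 minutes

setInterval(() => {
  for (const name of fs.readdirSync(DIR)) {
    if (!name.endsWith(".sqlite")) continue;
    const file = path.join(DIR, name);
    if (Date.now() - fs.statSync(file).mtimeMs > MAX_AGE_MS) {
      fs.unlinkSync(file); // scrap the stale per-client database
    }
  }
}, 5 * 60 * 1000); // run every 5 minutes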
I've wanted to implement something like this for a long time, and this was a good way to exercise my mind.
Maybe I'll implement it like this!
Introducing easy JavaScript data access
So you want to rapidly prototype a really cool Web 2.0 JavaScript application, but you don't want to spend all your time writing the wiring code to get to the database? Traditionally, to get data all the way from the database to the front end, you need to write a class for each table in the database with all the create, read, update, and delete (CRUD) methods. Then you need to put some marshalling code atop that to provide an access layer to the front end. Then you put JavaScript libraries on top of that to access the back end. What a pain!
This article presents an alternative method in which you use a single database class to wrap multiple database tables. A single driver script connects the front end to the back end, and another wrapper class on the front end gives you access to all the tables you need.
Example/Usage
// Sample functions to update authors
function updateAuthorsTable() {
  dbw.getAll(function(data) {
    $('#authors').html('<table id="authors"><tr><td>ID</td><td>Author</td></tr></table>');
    $(data).each(function(ind, author) {
      $('#authors tr:last').after('<tr><td>' + author.id + '</td><td>' + author.name + '</td></tr>');
    });
  });
}

$(document).ready(function() {
  dbw = new DbWrapper();
  dbw.table = 'authors';
  updateAuthorsTable();

  $('#addbutton').click(function() {
    dbw.insertObject({ name: $('#authorname').val() }, function(data) {
      updateAuthorsTable();
    });
  });
});
I think this is exactly what you're looking for. This way you won't have to build it yourself.
The most important thing is to be careful about the rights you grant to the MySQL user used for this kind of operation.
For instance, you don't want it to be able to DROP a database, nor to execute a request such as:
LOAD DATA LOCAL INFILE '/etc/passwd' INTO TABLE test FIELDS TERMINATED BY '\n';
You have to limit the operations allowed for this MySQL user, and the tables it can access.
Access to the whole database:
grant select on database_name.*
to 'user_name'@'localhost' identified by 'password';
Access to a table:
grant select on database_name.table_name
to 'user_name'@'localhost' identified by 'password';
Then... what else? This should prevent unwanted SQL injection from updating/modifying tables or accessing other tables/databases, at least as long as SELECT on a specific table/database is the only privilege you grant to this user.
But it won't prevent a user from launching a silly, badly performing request which might eat all your CPU:
var data = Querier({
  table: "mytable, mytable9, mytable11, mytable12",
  columns: ["mytable.column1", "count(distinct mytable11.column2)",
            "SUM(mytable9.column3)"],
  where: "column8 IN (SELECT column7 FROM mytable2 " +
         "WHERE column4 IN (SELECT column5 FROM mytable3))",
  limit: "500000"
});
You have to run some checks on the data passed in if you don't want your MySQL server to be brought down.
I am developing an app with Mongo, Node.JS and Angular
Every time an object is delivered to and handled in the front end, all ObjectIds are converted to strings (this happens automatically when I send it as JSON). But when I save objects back into Mongo, I need to convert _id, and any other manual references to other collections, back into ObjectId objects. If I want to cleanly separate the database layer from the rest of my backend, it gets even messier. Let's assume my database layer has the following signature:
database.getItem(itemId, callback)
I want my backend business logic to treat itemId as an opaque type (i.e. no require'ing mongo or knowing anything about ObjectId outside of this database layer), yet at the same time I want to be able to take the result of this function and send it directly to the frontend with Express:
exports.getItem = function(req, res) {
  database.getItem(req.params.id, function(err, item) {
    res.json(item);
  });
};
What I end up doing now is:
exports.getItem = function(itemId, callback) {
  if (typeof itemId == 'string') {
    itemId = new ObjectID(itemId);
  }
  var query = {_id: itemId};
  items.findOne(query, callback);
};
This way it can handle both calls that come from within the backend, where the itemId reference might come from another object and thus may already be in the correct binary format, and requests where itemId is a string.
As I already mentioned above, saving an object that came from the front end and contains many manual references to other collections is even more painful, since I need to go over the object and change all the id strings back to ObjectIds.
This all feels very wrong, there must be a better way to do it. What is it?
Thanks!