Creating a drop-down options list using PokeAPI - javascript

I am starting to build my first app using pokeapi.co. I had the idea to make a drop-down list of all ~1000 Pokémon and wanted to pull the data from the API. When I request data from "https://pokeapi.co/api/v2/pokemon", it does give me an array of Pokémon names, but it is limited to 20. Is there a way I can set the limit to the maximum, or fetch them all? Also, I am not sure whether this is the best way to implement the drop-down menu, so any additional advice or alternative approach is welcome.

For pokeapi.co, calling any API endpoint without a resource ID or name will return a paginated list of available resources for that API. By default, a list "page" will contain up to 20 resources. If you would like to change this, just add a 'limit' query parameter to the GET request, like this:
https://pokeapi.co/api/v2/ability/?limit=1000
In my opinion, requesting 1000 items at a time is not an elegant way to do it, not to mention the long time the user has to wait for the returned data. Instead, I suggest turning it into a table with a search option, split into pages of 20-30 items each, fetching more data as the user moves to a new page.
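That said, if you do want the single-request approach, here is a minimal sketch in plain JavaScript; the element id "pokemon-select" is a hypothetical placeholder for your own markup.

// Fetch the full list once and fill a <select> with one option per Pokémon.
async function populatePokemonDropdown() {
  const response = await fetch('https://pokeapi.co/api/v2/pokemon?limit=1000');
  const data = await response.json();
  const select = document.getElementById('pokemon-select'); // hypothetical id
  for (const pokemon of data.results) {
    const option = document.createElement('option');
    option.value = pokemon.name;
    option.textContent = pokemon.name;
    select.appendChild(option);
  }
}
populatePokemonDropdown();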

Related

How do I efficiently load someone's steam inventory and price of their every item on my website?

I am running a Node.js server and I want to load a user's inventory so they can see their items and the value of each item, using JavaScript. I have tried to approach this problem by making multiple XHR requests: the first request fetches the user's items, and the others fetch the value of each item. Of course, having this many XMLHttpRequests causes problems, and I am not able to get all of the items displayed, if any. How should I get this working? Should I store the player's inventory instead of loading it every time? But what if the contents change? Also, I need to find a way to get the price of each item more efficiently. Should I store prices somewhere in my database?
I'm using the following APIs:
Get inventory contents: https://steamcommunity.com/profiles/{STEAMID}/inventory/json/730/2
Get price of item: https://steamcommunity.com/market/priceoverview/?currency=1&appid=730&market_hash_name={ITEM'S MARKET_HASH_NAME}
The market/priceoverview endpoint has a very low rate limit, and you probably don't want to call it every time you need the price of an item. Storing the result in a database is definitely a good start, but depending on how many items you want to track, that won't be enough. A good workaround would be to use a third-party API: either prices from a different marketplace that provides an API to get the prices for all your items in a single call, or an API that provides cached data from Steam.
CSGO Trader provides an API which combines prices across the most common marketplaces (including the Steam Market). https://prices.csgotrader.app/latest/prices_v6.json hosts a price list for all CS:GO items which is updated every 8 hours. For market data of other apps you depend on services like SteamApis.
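Since that list only changes every 8 hours, a simple approach is to refresh it on an interval and answer price lookups from memory. A minimal Node sketch (Node 18+ for global fetch; the exact response shape, an object keyed by market_hash_name, is an assumption to verify against the actual JSON):

const PRICES_URL = 'https://prices.csgotrader.app/latest/prices_v6.json';
let priceCache = {};

// Download the full price list and keep it in memory.
async function refreshPrices() {
  const response = await fetch(PRICES_URL);
  priceCache = await response.json();
}

refreshPrices();
setInterval(refreshPrices, 8 * 60 * 60 * 1000); // the list updates every 8 hours

function getPrice(marketHashName) {
  return priceCache[marketHashName]; // undefined if the item is unknown
}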

When populating drop-down options in a form, is it better to use a filter or to request new data?

I am performing an axios ajax request to my database for a JSON list of tags related to a topic which is selected using a dropdown. If no topic value is passed, then I get a list of every tag in the database (maybe some 100-200 tags at the moment).
The steps are:
User selects a topic from the dropdown
Listen for the onChange event and pass the value selected to my API using an axios get().
Receive the pre-filtered list as a JSON array of objects from the database, based on the topic value sent
Obviously, every time the user changes the topic, another call to the API/database is made. I have seen (but never used) the alternative of filtering data that has already been loaded.
When designing a form, would it be better to load all the form's option values on beforeMount() and then filter them depending on what is selected? Or is waiting until the user selects an option before loading other options a better practice?
If the number of tags won't grow dramatically, I recommend loading them all at once and then just filtering them with a computed prop.
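A minimal sketch of that idea, assuming a Vue 2 options-API component and a hypothetical /api/tags endpoint returning tag objects with a topic field:

import axios from 'axios';

export default {
  data() {
    return { tags: [], selectedTopic: null };
  },
  async beforeMount() {
    // One request for the full tag list up front.
    const response = await axios.get('/api/tags'); // hypothetical endpoint
    this.tags = response.data;
  },
  computed: {
    // Re-filters automatically whenever selectedTopic changes; no new request.
    filteredTags() {
      if (!this.selectedTopic) return this.tags;
      return this.tags.filter(tag => tag.topic === this.selectedTopic);
    }
  }
};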

Store header data in url

We are building a shopping website and need to display hundreds of products and their images. We don't have the product images, but a separate company's service does.
I was given access to this API service. GETting a product image at endpoint https://imageservice.com/api/v3/images/{product_UPC} requires header Ocp-Apim-Subscription-Key.
This returns a response with an array of applicable product image variants. Each image in the response array is at a URL like https://imageservice.com/api/v3/asset/[hash]. These hashed image URLs are one-time use and must be requested with the Ocp-Apim-Subscription-Key again to actually display the image.
This makes the request process to our own API for products difficult as we cannot seed our database with products and their respective image URLs. Instead, each time our shopping site requests products from our own database, we must request the images separately by product ID, loop through the products and match images to each.
Additionally, their service is throttled to about 20 requests every 10 seconds and we load 30-40 products in each paginated call. This will slow the display of products if we don't initially load the URLs into the database.
Question: We already have a paid license and API key to use this separate image API service, and I already have each product in our DB stored with its correct image URL. The problem is we need to pass the header Ocp-Apim-Subscription-Key along with the URL without forming an entirely new AJAX request. How can this be done?
If your question is how to fetch the image without making requests, then the short answer is no, you can't. Headers exist for security reasons: if the target service requires headers and not only URL parameters, then you'll have to send a new request each time you want to retrieve an image, following each step they require.
I see only two solutions.
Look further into the API
The API of the company you retrieve images from may have deeper options that give you more control over it, for example a call that retrieves many images at once in a single request. Take a closer look at it; that might be your best solution.
Store the images yourself
If neither of the above proposals fits your needs, then you have no choice but to store the images on your own. You are dealing with an external API, which seems to be private, so you are limited by the work of your partner company, which is entirely normal and expected. They may have put those limitations in place for good reasons, and bypassing them can lead to insecure behavior.
If you want maximum control, then you need to handle most of it yourself. If you are tied to your partner company's work, then you have to discuss with them what permissions they can give you and how you can get the most out of their service.
EDIT
You can also preconfigure your requests using an HTTP client such as Axios; take a closer look at that option. At the very least, it avoids setting all the request parameters on each call.
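For example, a preconfigured Axios instance can attach the subscription key to every call. A minimal sketch for server-side Node (the base URL is the one from the question; keeping the key in an environment variable is an assumption about your setup):

const axios = require('axios');

// Every request through this instance carries the required header.
const imageApi = axios.create({
  baseURL: 'https://imageservice.com/api/v3',
  headers: { 'Ocp-Apim-Subscription-Key': process.env.IMAGE_API_KEY }
});

// Usage: only the path varies per call.
async function getProductImages(productUpc) {
  const { data } = await imageApi.get(`/images/${productUpc}`);
  return data; // array of applicable image variants, per the question
}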

How to paginate a search result using the same query with different skip and limit values

Some of you might argue that this is a question for programmers.stackexchange.com, but having read through the Help Center of Stack Overflow, I believe this is a specific programming problem and I am more likely to get a response here.
I have a web app that uses ExpressJS with a Neo4j database as the backend. I have a search screen where I would like to use the power of Neo4j's relationships. The search screen accepts one or more values (i.e. manufacture year, fuel type, gearbox, etc.) and then makes a POST request to ExpressJS, where I construct a Cypher query from the parameters of the POST request, as shown below as an example:
MATCH
(v:VEHICLE),(v)-[:VGEARBOX_IS]->(:VGBOX{type:'MANUAL'}),
(v)-[:VCONDITION_IS]->(:VCONDITION{condition:'USED'})
WITH DISTINCT v
WHERE v.manufacture_year = 1990
MATCH (v)-[r]->(info)
RETURN v AS vehicle, COLLECT({type:type(r), data:info}) AS details
Let's say that running the above query returns three vehicles and their properties.
If there are more than 20 vehicles in the result, then I want to paginate it, and I know how that works: we make use of SKIP and LIMIT, as shown below:
MATCH
(v:VEHICLE)
OPTIONAL MATCH (v)-[r:VFUEL_TYPE|:VGEARBOX_IS|:VHAVING_COLOR|...]->(info)
RETURN
v.vehicle_id AS vehicle_id,
v.engine_size AS engine_size,
v.published_date AS published_date,
COUNT(v) AS count,
COLLECT({type:type(r), data:info}) as data
ORDER BY v.published_date DESC
SKIP 20
LIMIT 16
Here is the workflow:
The user navigates to the search screen, which is a form with the POST method and various input fields.
The user selects some options based on which he/she wishes to search.
The user then submits the form, which makes a POST request to the server.
This request is handled by a route which uses the parameters of the request to construct a Cypher query like the one shown above. It runs the Cypher against the Neo4j database and receives the result.
Let's assume there are 200 vehicles that match the search. I then want to display only 20 of those results and provide next/previous buttons.
When the user is done seeing the first 20 and wants to see the next 20, I have to re-run the same query that was submitted initially, but with a SKIP value of 20 (the SKIP value keeps incrementing by 20 as the user navigates to the next page, and decrementing by 20 as he/she moves to the previous page).
My question is: what is the best approach to saving the search request (or the Cypher generated by the original request), so that when the user clicks the next/previous page, I re-run the original search query with a different SKIP value? I don't want to make a fresh POST request every time the user goes to the next/previous page. This problem could be resolved in the following ways, but I am not sure which is more performance-friendly:
Every time the user clicks the next or previous page, I make a new POST request with the preserved values of the original request and rebuild the Cypher query (POSTs can be costly, so I want to avoid this; if it is actually the better option, please explain why).
I store the original Cypher query in Redis, and whenever the user clicks next or previous, I retrieve the query specific to that user (which I need to track via a cookie, a session, or some sort of hidden UUID) from Redis, supply the new value for SKIP, and re-run it (I also have to decide when to delete this entry from Redis; deletion should happen when the user changes their search or abandons the page/site).
I store the query in the session (the user does not have to be logged in) or some other temporary storage (other than Redis) that provides fast access (I am not sure whether that is safe and efficient).
I am sure somebody has come across this issue and solved it in an efficient manner, which is why I post the question here. Please advise how I can best resolve this problem.
As far as performance goes, the first thing that you should absolutely do is use Cypher parameters. This is a way to separate your query string from your dynamic data. It has the advantage that you are guarded against injection attacks, but it is also more performant, because if your query string doesn't change, Neo4j can cache a query plan for your query and use it over and over again. With parameters, your first query would look like this:
MATCH
(v:VEHICLE),(v)-[:VGEARBOX_IS]->(:VGBOX{type: {vgearbox_type}}),
(v)-[:VCONDITION_IS]->(:VCONDITION{condition: {vcondition}})
WITH DISTINCT v
WHERE v.manufacture_year = {manufacture_year}
MATCH (v)-[r]->(info)
RETURN v AS vehicle, COLLECT({type:type(r), data:info}) AS details
SKIP ({page} - 1) * {per_page}
LIMIT {per_page}
Your JavaScript library for Neo4j should allow you to pass down a separate parameters object. Here is what that object would look like represented in JSON:
{
  "vgearbox_type": "MANUAL",
  "vcondition": "USED",
  "manufacture_year": 1990,
  "page": 1,
  "per_page": 20
}
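Hypothetically, passing that object might look like the sketch below with the official neo4j-driver package. Two assumptions: recent Neo4j versions write parameters as $param rather than {param}, and the SKIP arithmetic is done in JavaScript, since not every Cypher version accepts expressions in SKIP.

const neo4j = require('neo4j-driver');

const driver = neo4j.driver('bolt://localhost:7687',
  neo4j.auth.basic('neo4j', 'password')); // placeholder credentials

async function searchVehicles({ gearbox, condition, year, page, perPage }) {
  const query = `
    MATCH (v:VEHICLE),
          (v)-[:VGEARBOX_IS]->(:VGBOX {type: $vgearbox_type}),
          (v)-[:VCONDITION_IS]->(:VCONDITION {condition: $vcondition})
    WITH DISTINCT v
    WHERE v.manufacture_year = $manufacture_year
    MATCH (v)-[r]->(info)
    RETURN v AS vehicle, COLLECT({type: type(r), data: info}) AS details
    SKIP $skip LIMIT $limit`;
  const session = driver.session();
  try {
    const result = await session.run(query, {
      vgearbox_type: gearbox,
      vcondition: condition,
      manufacture_year: neo4j.int(year),
      skip: neo4j.int((page - 1) * perPage), // SKIP/LIMIT must be integers
      limit: neo4j.int(perPage)
    });
    return result.records;
  } finally {
    await session.close();
  }
}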
I don't really see much of a problem with making a fresh query to the database from Node each time. You should benchmark how long it actually takes to see if it really is a problem.
If it is something that you want to address with caching, it depends on your server setup. If the Node app and the DB are on the same machine or very close to each other, it's probably not important. Otherwise, you could use Redis to cache based on a key which is a composite of the values that you are querying for. If you are thinking of caching on a per-user basis, you could even use the browser's local storage. But do users often re-visit the same pages over and over? How static is your data, and does the data vary from user to user?
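If you do go the Redis route, here is a minimal sketch of the composite-key idea, assuming the node-redis v4 client and the searchVehicles function from the previous sketch; hashing the parameter object gives one shared cache entry per distinct search:

const crypto = require('crypto');
const { createClient } = require('redis');

const redis = createClient();
redis.connect();

async function cachedSearch(params) {
  // Identical parameter sets (for any user) map to the same key. In production,
  // build the key from sorted entries, since JSON.stringify is order-sensitive.
  const key = 'search:' + crypto.createHash('sha1')
    .update(JSON.stringify(params))
    .digest('hex');

  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  const results = await searchVehicles(params); // assumed JSON-serializable
  await redis.set(key, JSON.stringify(results), { EX: 300 }); // 5-minute TTL
  return results;
}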

How to keep the filters from DataTables after a Back/Forward or Refresh

We're using DataTables for our tables, and we have a problem (and a disagreement) about how to keep a history of the filters that were applied to the table, so that users can go back/forward and refresh through them.
Now, one proposed solution was to keep the filter string in the URL and pass it around as a GET request, which would work well with back/forward and refresh. But since I have very customized filtering options (nested groups of filters), the filter string gets quite long, actually too long to pass with a GET request because of the length limit.
So as GET is out of the question, the obvious solution would be a POST request, and this is what we can't agree upon.
The first solution is to use a POST request and accept the "annoying" popup every time we try to go back/forward or refresh. This also breaks the POST/Redirect/GET pattern that we use throughout the site, since there will be no GET.
Pros:
Simple solution
No second requests to the server
No additional database request
No additional database data
Only save the filter to the database when you choose to, so that you can re-apply it whenever you want
Cons:
Breaks the POST/Redirect/GET pattern
Having to push POST data with pushState (history.js)
How to get refresh to work?
The second solution is to use a POST request: the server side saves the data in the DB, gets an ID for requesting the saved data, and returns it; the client then does a GET request with this ID, which the server side matches back to the data, returning the right filter and thus retaining the POST/Redirect/GET pattern (see the sketch after the lists below). This solution makes two requests and saves every filter that users apply to the database. Each user would have only a limited number of 'history' filters saved in the database, with older ones removed as new ones are applied. Basically, the server side shortens your URL by saving the long data to the database, like a URL-shortening site does.
Pros:
Keeps the POST/Redirect/GET pattern
No popup messages when going back/forth and refreshing the page due to the post data being sent again
Cons:
Complicated solution
Additional request to the server
Additional request to the database
A lot of data in the database that will not be used unless the user goes back/forth or refreshes the page
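For illustration, a rough Express sketch of that save-and-redirect flow; the in-memory Map is a stand-in for the database table described above:

const express = require('express');
const app = express();
app.use(express.json());

const filters = new Map(); // stand-in for a DB table of saved filters
let nextId = 1;

app.post('/filters', (req, res) => {
  const id = String(nextId++);
  filters.set(id, req.body); // persist the long filter payload
  res.redirect(303, `/table?filter=${id}`); // POST/Redirect/GET
});

app.get('/table', (req, res) => {
  const filter = filters.get(req.query.filter); // look the filter back up
  res.json({ filter }); // respond however your page expects; JSON for brevity
});

app.listen(3000);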
A third solution would be very welcome, or pick one of the above and ideally explain why.
This is a fleeting thought I just had: you can save the state of the length, filtering, pagination and sorting by using bStateSave: http://datatables.net/examples/basic_init/state_save.html
My thought was that, theoretically, you could save the cookie generated by datatables.js into a database table, like you mention in the second solution, but the request would only have to happen each time you want to overwrite the current filter, replacing the current cookie with the previous "history" cookie.
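In current DataTables the option is spelled stateSave (bStateSave is the legacy 1.9 name), and by default the state goes to localStorage rather than a cookie. A sketch of enabling it and routing the state through your own server via the save/load callbacks; the /state/save and /state/load endpoints are hypothetical:

$('#example').DataTable({
  stateSave: true,
  // Override the callbacks to keep the state server-side instead of localStorage.
  stateSaveCallback: function (settings, data) {
    $.ajax({
      url: '/state/save', // hypothetical endpoint
      type: 'POST',
      contentType: 'application/json',
      data: JSON.stringify(data)
    });
  },
  stateLoadCallback: function (settings, callback) {
    $.ajax({
      url: '/state/load', // hypothetical endpoint
      dataType: 'json',
      success: function (json) { callback(json); }
    });
  }
});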
