At the moment, I'm building a website with Ruby on Rails.
Problem:
My website uses several external APIs to fetch data, for example the Amazon Product Advertising API. If I load e.g. 10 objects at once, it takes too much time.
Is it possible to load each object individually? (Once a request finishes, push its result onto the page with JavaScript, or something like that.) The user should be able to read the first objects while the rest of the content loads in the background.
Simple example:

result = []
list.each do |object|
  result << AmazonRequest.getItem(object)
  # And now push the updated result list to the view
end
Is this possible? If yes, how?
Thanks :)
I don't really know RoR, but if you don't need to have all the different API results on the server before sending them out (which seems to be the case), you could just make multiple AJAX requests and display the content from the different APIs independently.
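A minimal sketch of that idea, assuming a hypothetical Rails route like GET /items/:id that returns the rendered HTML for a single Amazon item (both the route and the #results container are made-up names):

// Fire one request per item; each result is appended as soon as it
// arrives, so the user can read the first items while the rest load.
var itemIds = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];

itemIds.forEach(function (id) {
  $.get('/items/' + id, function (html) {
    $('#results').append(html); // arrival order, not request order
  });
});

Each controller action would do a single AmazonRequest.getItem call, so the page itself renders immediately and each item appears as soon as its own API call completes.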
I'll try to explain everything the way I understand it as clearly as possible, so correct me if I'm confused about something.
I was trying to scrape the users from a member list on a website. I used Python, and the first thing I did was make a POST request to the request URL with the required headers, so that I would get a response containing the data I need. This didn't work, so I tried to find out why.
From what I understand now, the website uses AJAX to make XHR calls from JavaScript, which respond with the content (the users).
From what Chrome's developer tools tell me about the request initiators (here is an image for reference), the JS code is stored on a static site, and the request it makes responds with the HTML that contains the users.
The idea is to create a script that runs this static JS script that's hosted online and fetches the data about the users from it. (Image for clarification.)
How do I achieve this? I'm using Python. What libraries do I need, etc.? Any help/advice is greatly appreciated!
Based on your question, I think you're trying to scrape data from a website that uses AJAX to load its content.
In my opinion, have a look at Scrapy and some headless browsers (a small sketch follows the links below).
Check the following links for more information:
https://scrapy.org/
https://github.com/puppeteer/puppeteer
https://github.com/pyppeteer/pyppeteer
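For instance, a minimal Puppeteer sketch might look like this (the URL and the .member selector are placeholders; use the real ones from the dev tools). Pyppeteer offers nearly the same API in Python:

// Node.js + Puppeteer: let headless Chromium run the site's JS,
// then read the users out of the fully rendered DOM.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com/members', { waitUntil: 'networkidle0' });

  // Placeholder selector: grab the text of every rendered member element.
  const users = await page.$$eval('.member', els =>
    els.map(el => el.textContent.trim())
  );

  console.log(users);
  await browser.close();
})();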
I'm kind of new to this and looking to expand into pulling API results and displaying them on a page, whether it's from a blog resource or content generation.
For example, I want to pull from VirusTotal's API and display the returned content. What is the best way to capture the query in an input tag and display the result in a div? And what if there were an option to pull from different APIs based on a drop-down selection?
An example of the API to pull content from is here: https://developers.virustotal.com/reference#api-responses, under the /file/report section.
To get the data from the API, you need to send a request. However, there is a problem with CORS. Basically, you can't call the API from a page on your local machine inside your web browser, because the browser blocks the request. The web browser will only allow calls to and from the same server, with a few exceptions.
There are two ways to approach this.
The simplest one is to make a program that calls the API and outputs an HTML file. You can then open that HTML file to read the contents. If you want to update the info, you need to run the program again manually. You could easily do this by building off the Python they provide.
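A sketch of this first approach as a Node script, using the v2 /file/report endpoint from the page you linked (Node 18+ for the built-in fetch; the API key and file hash are placeholders you would fill in):

// Fetch a VirusTotal file report once and dump it into a local HTML file.
const fs = require('fs');

const url = 'https://www.virustotal.com/vtapi/v2/file/report'
          + '?apikey=YOUR_API_KEY&resource=FILE_HASH';

fetch(url)
  .then(res => res.json())
  .then(report => {
    const html = '<pre>' + JSON.stringify(report, null, 2) + '</pre>';
    fs.writeFileSync('report.html', html); // open this file in a browser
  });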
The other, slightly more complex way is to host a server on your PC. When you go to a page on that server, it sends a request to the API and then serves the latest information. There are tons of frameworks and ways to do this. For an absolute beginner on this subject, ExpressJS is a good start. You can make a hello-world program, and once you've done that you can figure out how to call the API whenever a page is loaded and display the results.
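And a sketch of the second approach: an ExpressJS server that proxies the API call, which also sidesteps CORS because the browser only ever talks to your own server (the route name and the VT_API_KEY environment variable are my own choices):

const express = require('express');
const app = express();

// GET /report/<file hash> fetches a fresh report and returns it as JSON.
app.get('/report/:hash', async (req, res) => {
  const url = 'https://www.virustotal.com/vtapi/v2/file/report'
            + '?apikey=' + process.env.VT_API_KEY
            + '&resource=' + req.params.hash;
  const response = await fetch(url);
  res.json(await response.json());
});

app.listen(3000, () => console.log('http://localhost:3000'));

A page served by the same server can then call something like $.get('/report/' + hash, ...) and write the result into a div, including switching endpoints based on your drop-down selection.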
The problem:
I have a jQuery AJAX (POST) based website, where the page doesn't refresh every time the user navigates to another page, which means I have to pull data with AJAX and present it to the user. For pulling small amounts of text this system works great. However, once the text is huge (let's say over 200,000 words), the load time is quite high (especially for mobile users). What I mean is, AJAX loads the full text and only displays it once everything has finished loading, so the user has to wait quite a bit to get the information.
If you look at a different scenario, let's say Wikipedia: there are big pages on Wikipedia, yet a user doesn't feel like they have to wait a lot, because the page loads step by step (top to bottom). So even if the page is large, the user is already kept busy with some information, and while they are processing that, the rest of the page keeps loading.
Question:
So is it possible to display information via AJAX as it loads in real time, i.e. keep showing whatever has been loaded so far instead of waiting for the full document?
AJAX (XMLHttpRequest) is a really great feature. For non-persistent connections, AJAX is better than a socket; but as soon as the connection needs to be persistent (impossible with XMLHttpRequest), a socket is fastest.
The simplest way to use WebSockets is socket.io, but you need a JavaScript server to use that library; there is one host where you can get one for free with admin tools: Heroku.
You can use a PHP server with the socketme library instead if you don't want a JavaScript server, but it is a bit more complex.
Also, you can think about it differently: you are trying to send a lot of data.
200,000 words is something like 70 KB (I tried with lorem ipsum text), and the transfer time depends on the amount of data and the connection speed/ping. You can compress your data in any way you like before sending it and decompress it on the other side. There are probably a thousand ways to do this, but I think the simplest is to find a JavaScript library to compress/decompress data and simply use your jQuery AJAX to send the compressed payload.
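As an illustration, with the pako library (a zlib port for JavaScript) the sending side could look like this; the /upload endpoint is a placeholder, and the receiving side would need a matching zlib inflate, which isn't shown:

// Deflate the text client-side, then POST the raw compressed bytes.
var compressed = pako.deflate(bigTextString); // Uint8Array

$.ajax({
  url: '/upload',             // placeholder endpoint
  type: 'POST',
  data: compressed,
  processData: false,         // don't let jQuery serialize the bytes
  contentType: 'application/octet-stream'
});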
EDIT — 21/03/14:
I misunderstood the question: you want to display progress while the data loads?
Yes, it is possible using the onprogress event; with jQuery you can follow this simple example: jQuery ajax progress
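A minimal version of that, assuming a hypothetical /big-text endpoint and a #content container, and assuming that rendering partial text as it arrives is acceptable:

// Provide our own XHR so we can hook its progress event and render
// the text received so far instead of waiting for the full response.
$.ajax({
  url: '/big-text',
  xhr: function () {
    var xhr = new XMLHttpRequest();
    xhr.addEventListener('progress', function () {
      $('#content').text(xhr.responseText); // everything received so far
    });
    return xhr;
  }
});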
I am building an analytics system for my Rails application, and I want to monitor every time I pull a certain object from the database; I'd like to put the logic in the model file. I have objects that are displayed on the page, and I need to see the number of views and clicks they get. I assume the views can be handled by figuring out when the object is pulled from the database (if someone could tell me how to do that), and I figured JavaScript could monitor the clicks. Would you all agree with this? Or is there a better way? I am using Rails 3.1 with MongoMapper and MongoDB.
To store the data, simply send an AJAX request from the browser with the information you want to store, as a POST request to a Rails resource like clicks#create. Be sure to include the relevant data attributes in the request.
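A sketch of what that could look like on the browser side (the /clicks route and the data-item-id attribute are assumptions about your markup):

// Report every click on a tracked element to the Rails app.
$(document).on('click', '.tracked', function () {
  $.post('/clicks', {
    item_id: $(this).data('item-id'),     // which object was clicked
    clicked_at: new Date().toISOString()  // when it happened
  });
});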
You may want to collect the requests and then send them all in a batch, based on time or on a user clicking a "done" button or something of that sort.
Recording the fact that someone clicked (from JavaScript) is different from recording when an object is retrieved from the database. For the latter, you could write a before filter for each of the methods in the class, or possibly implement an Active Record callback or something of that sort.
I'm developing a kind of image/profile search application that is based almost exclusively on AJAX. The main page basically displays profile images and allows the user to filter/search and paginate through them.
Pagination happens as the user scrolls, so the interface has to be very, very fast. There will be only 6 (maybe 9, but definitely not more) images displayed on the main page at a time, so users will scroll a lot. I'm currently using a very simple JS cache to store the results of all requests in case the user decides to go back; in that case, I simply pull everything out of the cache instead of querying the server.
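By "very simple JS cache" I mean roughly this (the /profiles endpoint and renderImages are made-up names):

// One cache entry per page; back navigation never hits the server twice.
var cache = {};

function showPage(page) {
  if (cache[page]) {
    renderImages(cache[page]);
    return;
  }
  $.getJSON('/profiles', { page: page }, function (data) {
    cache[page] = data;
    renderImages(data);
  });
}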
Client cache
One option I thought of is to pre-load, say, 10 pages ahead and store them in the cache.
But my biggest issue is filtering/searching, since that completely changes the type of query that goes to the server. My filters aren't very complex, only around 6-7 string/number/enum attributes.
Now, if I wanted to do all the filtering in the cache, I would have to duplicate all the search logic on the client and fetch all the data from the server (not just the data I'm displaying), so that I could filter the results on the client side.
This raises a question: should I make the cache somehow persistent? Store it in a cookie, maybe?
Server cache?
One suggestion might be to use memcached on the server and just store everything there. I'm definitely going to cache away all the results I can, but that doesn't save the server from handling loads and loads of AJAX requests.
I'm developing this application on Rails 3, and even though I love it, I wouldn't say it's the fastest thing in the world. Another option this gives me is to create a separate Rack/Sinatra application to handle only the AJAX requests. By this I mean requests from the main query, not all AJAX.
S3 for images?
A big part of this application is images, even though they're mostly small thumbnails (unless the user wants to display one bigger).
At the moment, I don't have problems with bandwidth; my VPS host provides me with 200 GB, which should be more than enough (I hope). The problem is loading speed. Would it help if I uploaded all the images to S3 and loaded them from there, or is this worth doing only for larger files? I'm going to load a lot of 100x150px images, which are generally under 50 kB.
Have you looked at SlickGrid? It has an interesting idea: it only builds the list items as users scroll down, and then removes them as the users scroll out of that range.
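For reference, a minimal SlickGrid setup looks roughly like this (it assumes SlickGrid's scripts and CSS are already on the page; the container id and fields are made up). The virtualization is automatic: only the visible rows get DOM nodes:

// A tiny grid: SlickGrid creates DOM rows only for the visible viewport
// and recycles them as the user scrolls.
var columns = [
  { id: 'name',  name: 'Name',  field: 'name' },
  { id: 'image', name: 'Image', field: 'image' }
];

var data = [];
for (var i = 0; i < 10000; i++) {
  data.push({ name: 'Profile ' + i, image: 'thumb_' + i + '.jpg' });
}

var grid = new Slick.Grid('#grid', data, columns, { enableCellNavigation: true });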