Why are JSON updates not showing on my webpage? - javascript

I have a JSON file. It loads successfully the first time, but when I update something in the JSON file, that update is not showing on my webpage; only the old data is showing. I tried refreshing the page several times, but it's not working.
I am using a XAMPP local server, and AJAX in my code to fetch the JSON data.
Please help.
My JSON file:
My JSON file
[
    {
        "name": "Aseem",
        "age": 29,
        "salary": 50000
    },
    {
        "name": "John",
        "age": 23,
        "salary": 53000
    },
    {
        "name": "Erica",
        "age": 25,
        "salary": 52000
    }
]

Often, if the contents of something isn't refreshing, it is likely a result of the cache. Web browsers cache content that is loaded over the network to reduce future loading times and the data required to load those resources.
You can test this yourself: open the Developer Tools in your browser (Ctrl+Shift+I in Google Chrome), go to the Network tab (or similar) and look for a tick box that says 'Disable cache'. Now if you refresh the page you should be able to see the updated version.
If you want to fix this programmatically, an easy way is to add a query string to the end of the URL. Since this is technically a new URL, the browser will re-request the resource, but the query will be ignored by the webserver, so there's no side effect. An example of this can be seen below:
// Append a unique value so the browser treats this as a new URL
let noCache = Date.now().toString(16);
url = `${url}?noCache=${noCache}`;
// Make the request with the modified URL
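Alternatively, the Fetch API can bypass the HTTP cache without touching the URL at all, via its cache option. A minimal sketch (the data.json path is illustrative):
// 'no-store' tells the browser to skip the HTTP cache entirely
fetch('data.json', { cache: 'no-store' })
    .then(response => response.json())
    .then(data => console.log(data));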

Check 2 points:
1. Is the AJAX data new?
2. Clear your browser cache and try it again.
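If the request goes through jQuery's $.ajax (an assumption; the question only says AJAX is used), setting cache: false makes jQuery append a timestamp query parameter for you:
// cache: false appends a "_={timestamp}" parameter to bust the cache
$.ajax({
    url: 'data.json', // illustrative path to the JSON file
    dataType: 'json',
    cache: false,
    success: function (data) {
        console.log(data);
    }
});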

Related

How to obtain and manipulate the request headers using JavaScript in the console?

I've encountered a paywall and I'm trying to bypass it using JavaScript in the console. I did some research and found a few different approaches, one of which is changing the request header in order to make a given website believe that you got there through a Twitter link (thus allowing you to view the content for free). The function I use aims to change the Referer by listening to the onBeforeSendHeaders event as specified on https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/webRequest/onBeforeSendHeaders. It looks like the following (NOTE: this function is typed and executed directly inside of the DevTools console):
function setReferer(x) {
    // Drop any existing Referer header
    x.requestHeaders = x.requestHeaders.filter(function(header) {
        return header.name !== 'Referer';
    });
    // Add a Referer pointing at Twitter
    x.requestHeaders.push({
        name: "Referer",
        value: "https://t.co/" // Twitter website
    });
    return { requestHeaders: x.requestHeaders };
}
// this example uses the Chrome browser
chrome.webRequest.onBeforeSendHeaders.addListener(setReferer,
    {
        urls: ["<all_urls>"],
        types: ["main_frame"]
    },
    ["requestHeaders", "blocking", "extraHeaders"] // extraHeaders meant to bypass CORS protocol
);
Unfortunately, upon refreshing the window, this approach gives me the following error:
GET <some_url> net::ERR_BLOCKED_BY_CLIENT
Behind this error is the URL to the source code of the article, which I was able to load and copy into Word, so I got the article I was looking for anyway. However, I wasn't able to view it inside of the browser's main frame. Note that I am doing this only for the purpose of polishing my coding skills. I am trying to get a better understanding of the more complicated facets of the HTTP protocol, especially the way headers get sent client-side and interpreted server-side. If anyone knows more about the subject, or has a resource that he or she wants to share, it would be greatly appreciated!
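For context, chrome.webRequest is an extension API: a blocking listener like the one above only runs from an extension's background script with the matching permissions declared in manifest.json, not from the DevTools console. A minimal sketch of such a (Manifest V2) manifest, with illustrative values:
{
    "name": "Referer Rewriter (example)",
    "version": "0.1",
    "manifest_version": 2,
    "permissions": ["webRequest", "webRequestBlocking", "<all_urls>"],
    "background": { "scripts": ["background.js"] }
}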

How to get Workbox PWA to work with .php file

I am new to PWA and to using Workbox. I have this test folder with the following file structure, using localhost as my server (i.e. localhost/test):
index.html
test.css
test.jpg
test.js
sw.js (code shown below)
importScripts('https://storage.googleapis.com/workbox-cdn/releases/3.0.0/workbox-sw.js');

if (workbox) {
    console.log(`Yay! Workbox is loaded 🎉`);
} else {
    console.log(`Boo! Workbox didn't load 😬`);
}

// precache all the site files
workbox.precaching.precacheAndRoute([
    {
        "url": "index.html",
        "revision": "8e0llff09b765727bf6ae49ccbe60"
    },
    {
        "url": "test.css",
        "revision": "1fe106d7b2bedfd2dda77f06479fb676"
    },
    {
        "url": "test.jpg",
        "revision": "1afdsoaigyusga6d9a07sd9gsa867dgs"
    },
    {
        "url": "test.js",
        "revision": "8asdufosdf89ausdf8ausdfasdf98afd"
    }
]);
Everything is working perfectly fine: files are precached, and I don't get the regular offline message when I am in offline mode, as shown in the image below.
So, I copied the exact folder to a test-2 folder, then renamed my index.html file to index.php, and in my sw.js file I updated the URL as below:
{
    "url": "index.php",
    "revision": "8e987fasd5727bf6ae49ccbe60"
},
- Please note that I changed the revision value too.
I did this because I want to implement PWA using Workbox in my own custom-built single-page app (but it's in .php format).
Going to my browser to run localhost/test-2 (normal mode), my files were precached too, including my index.php file (no error messages in my console, and the service worker was working perfectly fine). But when I switched to offline mode in my Sources tab and refreshed my browser to test the offline experience, alas! I got the offline message shown in the image below :(
I don't know what went wrong. I have no idea what happened, and I tried to google for reasons for days, but I can't seem to find a right and corresponding answer. Most of the tutorials out there work with .html.
So the question is: how can I implement PWA with a .php file, so that when the user is offline they don't get the normal "You're offline" message, but instead my webpage renders?
Thanks in advance.
To elaborate on #pate's answer:
Workbox by default tries to make sure that pretty URLs are supported out of the box.
So in the first example, you cached /test/index.html. When you request /test/, Workbox precaching actually checks the precache for:
/test/
/test/index.html
If your page was /test/about.html and you visited the page /test/about, precache would append a .html and check for that.
When you switched to the .php extension, this logic suddenly no longer worked.
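Roughly, the candidate lookup works like this (an illustrative sketch, not Workbox's actual source):
// Illustrative only: which precache keys are tried for a request
function precacheCandidates(url) {
    var candidates = [url];
    if (url.endsWith('/')) {
        candidates.push(url + 'index.html'); // directory index
    } else {
        candidates.push(url + '.html'); // "pretty URL" fallback
    }
    return candidates; // '/test/' -> ['/test/', '/test/index.html']
}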
There are a few options to get this working:
If you are using any of the Workbox tools to build your manifest, you can use the templatedUrls feature to map to a file (more details here):
templatedUrls: {
    '/test-2/': ['/test-2/index.php']
}
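If you build with workbox-build, for example, templatedUrls sits alongside the other build options. A sketch assuming workbox-build's generateSW, with illustrative paths:
// build.js
const workboxBuild = require('workbox-build');

workboxBuild.generateSW({
    globDirectory: 'test-2/',
    globPatterns: ['**/*.{css,js,jpg}'],
    swDest: 'test-2/sw.js',
    templatedUrls: {
        '/test-2/': ['/test-2/index.php']
    }
});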
If you are making the precache list yourself server-side, you can just tell it to precache the URL /test-2/ without the index.php, and precaching will simply cache that. Please note that you must ensure the revision changes with any changes to index.php. An entry like this is shown below.
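In that case the precache entry would look something like this (reusing the revision string from the question; it must change whenever index.php changes):
workbox.precaching.precacheAndRoute([
    {
        "url": "/test-2/", // cached instead of index.php
        "revision": "8e987fasd5727bf6ae49ccbe60"
    }
]);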
If you aren't making the precache manifest yourself, you can use the urlManipulation option to tell precache which URLs to check (more details here):
workbox.precaching.precacheAndRoute(
    [
        .....
    ],
    {
        urlManipulation: ({url}) => {
            const href = String(url);
            // Check whether the URL ends in a slash, and add
            // index.php or .php accordingly
            if (href.endsWith('/')) {
                return [url, `${href}index.php`];
            }
            return [url, `${href}.php`];
        }
    }
);
This is most likely because, in the screenshot showing the error, you're trying to access test-2/ instead of test-2/index.php.
Workbox, in the background, falls back to trying index.html for every route that ends in a slash. For this reason, even if you don't have "/" cached, the SW tries to give you "/" + "index.html", which seems to be cached, and the page works offline.
I bet your page works if you try to access test-2/index.php while offline. Does it?

How to make sw-precache use a network-first strategy?

I have a PWA web site using sw-precache, with this sequence:
1. Reload the page. The service worker should update the cache in the background. When it's done, you should see "New or updated content is available." in the console. The actual visible changes should not be visible until the next reload.
2. Reload the page again. The browser will use the new cache this time around. The changes should be visible now! There shouldn't be any messages in the console.
I need something similar, but if a file was updated, it needs to be visible on the first page load, not after the second. The other (not updated) files should still come from the cache.
Is this possible?
Having to reload twice to see new changes is uncomfortable.
You can use the runtimeCaching option to set up an appropriate URL pattern and strategy (networkFirst, cacheFirst, etc.) to match those requests:
"runtimeCaching": [{
"urlPattern": "",
"handler": "networkFirst"
}]
global.toolbox.router.default = global.toolbox.networkFirst; // for all routes
or
global.toolbox.router.get('/assets/(.*)', global.toolbox.networkFirst); // for some particular routes

How to access page sources from chrome devtools API

What is an easy way to access, with the Chrome DevTools API, all the content of the Sources tab in the DevTools?
I am writing a small program using nightmarejs to scrape some webpages, and I need to do some analysis, both on the rendered HTML and on the original one.
Nightmarejs doesn't provide an API call to get the source of the page. I am thinking about using the DevTools API, but it is not clear to me how to do so. As I can see many files in the Sources tab of the Chrome DevTools, I thought I could get this content easily.
For now, I have a few leads:
The chrome.devtools.network API.
There is a snippet in the documentation:
chrome.devtools.network.onRequestFinished.addListener(
    function(request) {
        if (request.response.bodySize > 40 * 1024) {
            chrome.devtools.inspectedWindow.eval(
                'console.log("Large image: " + unescape("' +
                escape(request.request.url) + '"))');
        }
    });
I think I could use a listener like this and get the body if it's available in the response, but I can't find the documentation for this response content. Also, I don't want to store the result of all the requests.
But my main problem is that I don't see the content of the request here. I tried to do a request.getContent(), which returned null.
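For what it's worth, in this API each finished request is a HAR entry whose getContent takes a callback; the body does not come back as a return value, which would explain the null. A short sketch:
chrome.devtools.network.onRequestFinished.addListener(function(request) {
    // getContent is asynchronous: the body arrives via the callback
    request.getContent(function(content, encoding) {
        // encoding is 'base64' for binary bodies
        console.log(request.request.url, content && content.length);
    });
});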
The chrome.debugger API.
I haven't had time to play with it yet.

Firefox addon sdk: retrieving value from a different site based on clipboard content

I just got started with Firefox add-ons to help speed up my team's work. What I am trying to create:
When on a specific site (let's call it mysite.com/input), I want to automatically fill out an input with the id "textinput" from the value that is stored on the clipboard.
Yeah, it is simple; it would be easy enough to just paste it, wouldn't it?... Now here is the twist:
I need another form of the value: on the clipboard it is x/y/z. There is a database site (let's call it database.com) on which a search like database.com?s=x/y/z directly gives the page from which it is possible to obtain the correct value, as it has the id #result.
I got lost on how to properly communicate between page and content scripts; I'm not even sure in what order I should use the page-mod and the page-worker.
Please help me out! Thank you!
The basic flow is this:
In your content script, you get the value from the form, somehow. I'll leave that up to you.
Still in the content script, you send the data to main.js using self.port.emit:
Code:
self.port.emit('got-my-value', myValue);
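For instance, a complete minimal content script could look like this (reusing the #textinput id from the question; purely illustrative):
// content script: read the field and hand its value to main.js
var myValue = document.getElementById('textinput').value;
self.port.emit('got-my-value', myValue);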
In main.js, you would then receive the 'got-my-value' event and make a cross-domain request using the request module.
Code:
// "data" comes from the Add-on SDK's self module
var data = require('self').data;

require('page-mod').PageMod({
    include: 'somesite.com',
    contentScriptFile: data.url('somescript.js'),
    onAttach: function(worker) {
        worker.port.on('got-my-value', function(value) {
            require('request').Request({
                url: 'http://someurl.com',
                onComplete: function(response) {
                    console.log(response);
                    // maybe send data back to worker?
                    worker.port.emit('got-other-data', response.json);
                }
            }).post();
        });
    }
});
If you need to receive the data back in the original worker, you would add another listener for the event coming back.
Code:
self.port.on('got-other-data', function(value) {
    // do something
});
I've been struggling with the same issue for the past 2 days until I found this:
https://developer.mozilla.org/en-US/Add-ons/SDK/Guides/Content_Scripts/Cross_Domain_Content_Scripts
They indicate the following:
However, you can enable these features for specific domains by adding
them to your add-on's package.json under the "cross-domain-content"
key, which itself lives under the "permissions" key:
"permissions": {
"cross-domain-content": ["http://example.org/", "http://example.com/"] }
The domains listed must include the scheme
and fully qualified domain name, and these must exactly match the
domains serving the content - so in the example above, the content
script will not be allowed to access content served from
https://example.com/. Wildcards are not allowed. This feature is
currently only available for content scripts, not for page scripts
included in HTML files shipped with your add-on.
That did the trick for me.
