Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
Do browsers cache interpreted JavaScript bytecode?
Depends on the Expires header (date/time after which the response is considered stale).
Basically, the first time your browser reaches out to a server to get the file, the server responds with something like "here's the file, store it for as long as you can".
The browser then keeps the file in its cache. The cache size is usually configurable, so you can't rely on any specific limit.
After a resource expires, the browser would then request and store it again.
Most CDNs would attempt to store their static resources for a year in your browser's cache. If they change something, they normally change the resource's name by appending a parameter (e.g. http://example.com/js/jquery.js?v=1) and your browser would recognize it as a new file.
All of the above is somewhat simplified but should work as a general description.
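The cache-busting trick described above can be sketched as a small helper. This is a hypothetical function, not part of any library; it just appends a version parameter so the browser treats a changed file as a brand-new URL:

```javascript
// Append a cache-busting version parameter to a resource URL.
// The version string itself is whatever your build produces.
function bust(url, version) {
  const sep = url.includes("?") ? "&" : "?"; // respect existing query strings
  return url + sep + "v=" + encodeURIComponent(version);
}

console.log(bust("http://example.com/js/jquery.js", "1"));
// http://example.com/js/jquery.js?v=1
```

When the file changes, you bump the version, the URL changes, and the browser's long-lived cache entry for the old URL is simply bypassed.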
Related
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 5 years ago.
I read about Spectre (CVE-2017-5753), but it is unclear how it actually affects the everyday programmer. I read a couple of articles but I am still not sure whether it may break any of my old projects or existing code. It would be great to know what I should look out for when trying to adapt to the changes Spectre introduced in how browsers process JavaScript.
After researching I found some recommendations here.
Best practices, summed up briefly:
Prevent cookies from being loaded into the memory of the renderer using options present in the Set-Cookie header.
Make it hard to guess and access the URL of pages that contain sensitive information. If the URL is known to the attacker, the renderer might be forced to load it into its memory. Same-origin policies alone do not protect against these attacks.
Ensure that all responses are returned with the correct MIME type, and that the content type is marked as nosniff. This will prevent the browser from reinterpreting the contents of the response, and can prevent it from being loaded into the memory of the renderer when a malicious site tries to load it in certain ways.
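The cookie and MIME-type recommendations above can be sketched as a set of response headers. The header names and attributes are standard HTTP; the helper function and the cookie value are hypothetical, just to show the shape:

```javascript
// Sketch of Spectre-hardening response headers, assuming a session cookie
// and a JSON response (the cookie value is a placeholder).
function spectreHeaders() {
  return {
    // Keep the cookie out of cross-site renderer processes.
    "Set-Cookie": "session=abc123; SameSite=Strict; HttpOnly; Secure",
    // Stop the browser from reinterpreting the response body as another type.
    "X-Content-Type-Options": "nosniff",
    // Always declare the correct MIME type explicitly.
    "Content-Type": "application/json; charset=utf-8",
  };
}

console.log(spectreHeaders()["X-Content-Type-Options"]); // nosniff
```

With `nosniff` plus a correct `Content-Type`, a malicious page that tries to load your JSON endpoint as a script or image cannot trick the browser into pulling it into its renderer's memory.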
References:
https://www.scademy.com/recovering-from-spectre-javascript-changes/
https://blog.mozilla.org/security/2018/01/03/mitigations-landing-new-class-timing-attack/
https://blogs.windows.com/msedgedev/2018/01/03/speculative-execution-mitigations-microsoft-edge-internet-explorer/
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I have an exe that I need to run on a site, and I have access to its database. I need to upload the exe and run it directly from the site. How can I make it execute like in Windows, and how do I make it show on the site (JavaScript?)
As per my comment, this question isn't valid on this site, but I'd like to point something out before it gets deleted:
Storing a whole .exe in a database sounds a bit odd. If you really need to, then I guess you could store a byte array in the database and then write the exe back to the filesystem from that byte array.
A better solution would be to store the .exe on the filesystem right away and only store the path to the executable in the database.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I have text files of 200-400 MB. I want to display these files on my web site. When I try to read such a file using a CGI script, the website becomes unresponsive. I tried reading the file in chunks but could not succeed. How should I accomplish this task? I am new to web technologies.
The best way to resolve your issue is to implement pagination, so you can serve the book page by page (pun not intended), assuming the file is stored on the backend.
If it is stored on the client's computer, are you sure that you need this? There might be other ways to solve the problem you are facing; loading a 200-400 MB file into the browser is really a bad idea.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
Python code periodically updates the string data in a variable, then saves the data as a .txt file using logfile.open, write, and close(). When the .txt has been saved, AJAX .load needs to recognize that an update has occurred and then automatically execute a function. Is there a way to monitor the Python-generated .txt file for a change, or to monitor its date/time stamp for a change, that could be used to signal that the change has occurred and execute the JavaScript function?
If you're using Node.js you can use https://www.npmjs.com/package/watch, but I doubt it can be used from the browser.
If you're using the browser, you should probably fall back to either polling (if you can afford the additional traffic) or a watcher on the server side (e.g. with watch) plus a notification to the client (e.g. via socket.io).
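The polling option can be sketched with a HEAD request that only compares the `Last-Modified` header, keeping traffic small. The URL, interval, and helper names below are hypothetical:

```javascript
// Pure helper: given the previous and current Last-Modified values,
// decide whether the file changed since the last poll.
function hasChanged(prevStamp, currStamp) {
  return prevStamp !== null && prevStamp !== currStamp;
}

// Browser-side polling sketch using the helper (URL is hypothetical).
let lastSeen = null;
async function poll(url, onChange) {
  const res = await fetch(url, { method: "HEAD" }); // headers only, no body
  const stamp = res.headers.get("Last-Modified");
  if (hasChanged(lastSeen, stamp)) onChange(stamp);
  lastSeen = stamp;
}

// Poll every 5 seconds:
// setInterval(() => poll("/data/output.txt", s => console.log("changed:", s)), 5000);
```

Inside `onChange` you would call your AJAX `.load` to fetch the new file contents and run the JavaScript function that depends on them.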
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
I'm making a website for a client that will mostly be used offline through a wifi router. But there will also be an online version available. The purpose of this is to distribute files in parts of the world where infrastructure is not suitable for internet access. For those who do have internet access in some of these parts, the internet probably isn't very fast or reliable.
Some of the pages I've made can be accessed simply by using JS functions to hide one page and show another, instead of linking to another file. I figured this method might load content quicker than linking to multiple pages. But is that true? Or should I just put the content on separate pages?
Yes, that's true, but most browsers don't load a page if they don't get a response, so you'll need at least one local server. You can store almost everything (styles, scripts and content) in localStorage as strings and eval them if/when needed. Also, if local processing isn't a problem, you can use AngularJS to build and rebuild the page.
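The localStorage idea above can be sketched with two hypothetical helpers: one caches a page fragment as a string, the other swaps it into the visible page on a later (possibly offline) visit. In the browser, `store` would be `window.localStorage` and `target` a DOM node:

```javascript
// Cache one page fragment under a namespaced key (key scheme is hypothetical).
function savePage(store, id, html) {
  store.setItem("page:" + id, html);
}

// Retrieve a cached fragment and render it into the target element, if present.
function showPage(store, id, target) {
  const html = store.getItem("page:" + id);
  if (html !== null) target.innerHTML = html; // swap the visible "page"
  return html; // null when nothing was cached for this id
}
```

This is the same hide-one-page, show-another pattern from the question, except the hidden pages live in localStorage instead of the DOM, so they survive a reload even without network access.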