I have some JavaScript (originally CoffeeScript) code that uses data from the database. I build the data array with .erb, write the logic in .coffee, and finally use the compiled .js. Fairly standard stuff. The problem is that the JavaScript served to the browser is cached and never refreshes, even after a server restart or assets:clean.
To get the JavaScript to refresh I have to delete the tmp folder contents and start the server.
Q1. How can I make the JavaScript refresh on every request? I tried to inline the code by using Coffeebeans to render a file containing it, but the code comes out HTML-encoded.
Q2. Will this even work on Heroku? (it only accepts precompiled assets)
Related
Normally, when a client requests a .php file, the PHP code runs on the server and the resulting HTML and JavaScript are sent to the client.
Question
Is it possible to have the server request a (local) webpage and run both the PHP code and the HTML with its JavaScript on the server? I have created a single .html file that, after about 3 seconds of processing, locally creates the image data for a thumbnail of a given video.
Why
I need to generate a thumbnail for a video. I use shared hosting and my hosting provider doesn't support ffmpeg. You can, however, generate thumbnails using a canvas and JavaScript. I'm already putting a lot of load on the client, and if this is possible, upload and download times would be significantly shorter than generating the thumbnail on the client.
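For context, the client-side approach looks roughly like this (a minimal sketch; the element IDs and the seek time are placeholders, my actual file does more):
var video = document.getElementById('myVideo');   // placeholder id
var canvas = document.getElementById('myCanvas'); // placeholder id

video.addEventListener('loadeddata', function () {
    video.currentTime = 1; // seek to the frame we want for the thumbnail
});

video.addEventListener('seeked', function () {
    // draw the current video frame onto the canvas, then read it back
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    canvas.getContext('2d').drawImage(video, 0, 0);
    var thumbnail = canvas.toDataURL('image/png'); // base64 image data
    // ...upload `thumbnail` to the server from here...
});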
Attempts
I've tried using file_get_contents(), but it doesn't run the code (which makes sense). Is there a way I could have it open and run for x seconds and then grab the contents?
I've tried using curl to get the file, using this function here. I believe it is similar to my previous attempt in that it gets the file contents but never executes them.
My final attempt was to use new DOMDocument(). I couldn't even get the page to load, though. First, it can't parse the video tag. It gives this error:
Warning: DOMDocument::load(): Specification mandates value for attribute controls in
file:\path\to\html\document.html, line: 53 in C:\path\to\php\document.php on line 50
If I were to remove the video tag (which is required), I get errors while parsing my JavaScript. So that attempt also did not work.
Is there a way to have PHP process the code (for something on the server) for x seconds before getting the contents? That would allow time to generate the thumbnail data. If there is another way to do this without using ffmpeg on the server, that would be great.
So, as I mentioned in the comments, what I'm going to explain is just one option (not the best one, and only addressing your need to run HTML code!).
Where to do this?
Personally, I'd rather do this while the video is being uploaded from the admin's browser, and the best part is that you can make it part of the posting procedure.
So, in the page where you want this process to happen, put an invisible iframe like this:
<iframe id="myIframe" style="display: none;"></iframe>
How to begin the process?
I don't know what method you use to upload the videos (and it really isn't that important!), but let's assume you use FormData. After the video is uploaded you need something unique to address it (let's say an id). So, after the upload, we can receive something like {id: 20, initiateThumbnail: true} as the resulting JSON data. Then we can simply use that hidden iframe as the browser you've been asking for, like this:
$("#myIframe").attr("src","dothething.php?video=20");
Now do whatever you wanted to do in it, and change its content when it's done. Then you need to wait for the result:
$('#myIframe').on('load', () => { // .on('load') — the .load() shorthand was removed in jQuery 3
    let result = $('#myIframe').contents();
    // check the result!
});
As you may have already figured, you can handle any errors by inspecting the result.
Notes
The event listener we attached to the iframe (the load event) also fires when you first initiate making the thumbnail. So be careful when checking the result (the content of that iframe!).
If you don't use Ajax or FormData, simply set your form's target to the iframe I used.
One question: what happens if the network connection goes down during this process? Simple answer: you can check in many ways whether the thumbnail exists. If it doesn't, you can create it when a user requests it in their browser, upload it back to the server, and save it forever (just as you did in the admin's panel!).
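To put it all together, here is a rough sketch of the whole flow (the endpoint name, the JSON shape, and the #ready marker inside the iframe are just my assumptions for illustration):
var thumbnailStarted = false;

// called with the JSON result of the upload, e.g. {id: 20, initiateThumbnail: true}
function onVideoUploaded(response) {
    if (!response.initiateThumbnail) return;
    thumbnailStarted = true;
    $("#myIframe").attr("src", "dothething.php?video=" + response.id);
}

$('#myIframe').on('load', function () {
    if (!thumbnailStarted) return; // ignore loads we didn't initiate
    var result = $("#myIframe").contents();
    if (result.find('#ready').length) {
        // dothething.php rendered #ready: the thumbnail exists on the server
    } else {
        // handle the error reported inside the iframe
    }
});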
I think there is no way to generate a thumbnail on a PHP server other than with ffmpeg.
The only thing you can do, I suppose, is to force the canvas generation on page load, if you aren't already doing it.
Anyway, you are trying to do something that cannot work. PHP doesn't evaluate the HTML code; it's just a preprocessor, not an interpreter like the browser. You can wait all the time in the world, but you'll never get the content of an image that only a browser will generate.
I know it's possible to force reload from server using location.reload(true). However, let's say I used that to refresh index.html. If index.html loads a bunch of javascript files, those are still coming from the cache for me. Is there any way to ignore the cache for the duration of a request?
My use case is that I'm doing AB testing on my app, and want to provide a way for users to go back to the old version if something isn't working. But some of the URLs are the same, even though the files between versions are different. It would be nice to be able to handle this in JS rather than having to change every URL on the new version.
There are actually at least 535 different ways to reload a page via JavaScript, FYI ;).
Have you tried putting document in front? document.location.reload(true);
Also try this other option:
window.location.href = window.location.href;
or
history.go(0);
Sure, both are soft reloads, but they seem to work in certain situations.
If nothing works, you'll have to append random data to the URL (like a timestamp) to force the download from the server, bypassing the cache.
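Something along these lines, for example (a sketch; the script path is a placeholder):
// reload the page with a URL that never matches a cache entry
window.location.href = window.location.pathname + '?nocache=' + Date.now();

// the same trick works for an individual script injected at runtime
var s = document.createElement('script');
s.src = '/js/app.js?nocache=' + Date.now(); // placeholder path
document.body.appendChild(s);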
If you want to stop the browser from taking JS files from the cache, you need it to fetch not script.js but something like script.12345.js. When you update the file on the server, you change the hash in the name to, say, script.54321.js, and the browser understands the file is different and must download it again. You can use webpack to automate this: in output, instead of {filename: 'bundle.js'} you write {filename: 'bundle.[hash].js'}.
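If I read the webpack docs right, the relevant bit of configuration looks something like this (paths are placeholders; [contenthash] is the currently recommended variant of [hash]):
// webpack.config.js (sketch)
const path = require('path');

module.exports = {
    entry: './src/index.js', // placeholder entry point
    output: {
        path: path.resolve(__dirname, 'dist'),
        // [contenthash] changes only when the file's content changes,
        // so the browser re-downloads exactly the files that changed
        filename: 'bundle.[contenthash].js',
    },
};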
I am trying to achieve the following in an ASP.NET MVC3 web application that uses Razor.
1) In my Index.cshtml file, I have the following reference:
<script src="/MySite/Scripts/Main.js"></script>
2) I load my home page for the first time, and an HTTP request is made to fetch this file, which returns 200.
3) Then I make some changes to Main.js and save it.
4) Now I just reload the home page (please note that I am not refreshing the page) by going to the address bar, typing the home page URL and pressing Enter. At this point, I want the browser to fetch the updated Main.js file by making an HTTP request again.
How can I achieve this? I don't want to use the System.Web.Optimization bundling approach. I know this can be achieved by changing the URL (appending a version or some random number) every time the file changes.
But the challenge here is that the URL is hardcoded in my Index.cshtml file. Every time Main.js changes, how can I change that hardcoded URL in Index.cshtml?
What I was trying to achieve is to invalidate the browser cache as soon as my application's JavaScript file (already cached in the browser) gets modified at its physical location. I understood that this is simply not achievable, as no browser currently provides that support. To get around it, below are the only two ways:
1) Use MVC bundling
2) Every time the file is modified, change the URL by appending a version or any random number via the querystring. This method is explained here: force browsers to get latest js and css files in asp.net application
But the disadvantage of the 2nd method is that if any external application refers to your application's JavaScript file, its browser cache will still not be invalidated without refreshing the external application in the browser.
Just add a timestamp as a querystring parameter:
@{ var timestamp = System.DateTime.Now.ToString("yyyyMMddHHmmssfff"); }
<script src="/MySite/Scripts/Main.js?TimeStamp=@timestamp"></script>
Note: only update the TimeStamp parameter value when the file is updated/modified; as written above, DateTime.Now changes on every request, which disables caching entirely. Basing the value on the file's last write time is one way to achieve that.
It's not possible without either using bundling (which handles versioning internally) or manually appending a version. You can create a single-file bundle as well if you want.
I have a JavaScript file which internally calls a function to load an XML file.
$(document).ready(function () {
    var urlVal = "web/help.xml"; // the XML this script loads
    // ...fetch and use the XML here...
});
The JavaScript is versioned so that the browser always loads it instead of caching it:
"./js/help_min.js?ver=${verNumber}"
I am facing an issue where the browser downloads the latest JS file, but the help.xml loaded by that JS file is still served from cache.
Is there a way to make the browser always load the latest help.xml rather than caching it?
The proper approach would be to fix the backend to send headers telling the browser not to cache the data (see e.g. How to control web page caching, across all browsers?). But if you cannot do that, make the request unique each time, i.e.
"./js/help_min.js?ver=${verNumber}&random=${something_random}"
where the value of random can be e.g. the current timestamp (with millis). That way your request will never match a cache entry, forcing a fetch from the server each time.
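You can also do the cache busting inside the script itself when it requests the XML; for example (a sketch, assuming jQuery since the snippet above uses it):
$(document).ready(function () {
    // cache: false makes jQuery append "_=<timestamp>" to the request,
    // so the browser can never serve help.xml from its cache
    $.ajax({ url: "web/help.xml", cache: false, dataType: "xml" })
        .done(function (xml) {
            // ...use the freshly fetched XML...
        });
});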
PS: you also seem to have a design flaw, as by that logic the same ${verNumber} should return the same data, in which case caching would be more than welcome to reduce traffic and speed up loading.
I am using Angular and ASP.NET Web API to allow users to download files that are generated on the server.
HTML Markup for download link:
<img src="/content/images/table_excel.png">
<a ng-click="exportToExcel(report.Id)">Excel Model</a>
<a id="report_{{report.Id}}" target="_self"></a>
The last anchor tag serves as a placeholder for an automatic click event. The visible anchor calls the exportToExcel method to initiate the call to the server and begin creating the file.
$scope.exportToExcel = function (reportId) {
    reportService.excelExport(reportId, function (result) {
        // build the URL of the file the server just generated
        var url = "/files/report_" + reportId + "/" + result.data.Model.fileName;
        // point the hidden anchor at it and trigger the download
        var dLink = document.getElementById("report_" + reportId);
        dLink.href = url;
        dLink.setAttribute('download', result.data.Model.fileName);
        dLink.click();
    });
}
The Web API code creates an Excel file. The file on the server is about 279k, but when it is downloaded on the client it is only 7k. My first thought was that the automatic click might be happening before the file was completely written, so as a test I wrapped the click event in a 10-second $timeout. It failed with the same result.
This seems to happen only on our remote QA server; on my local development server I always get the entire file back. I am at a loss as to why. We have similar functionality where files are constructed from a database blob and saved to the local disk for download; the same client-side download method is employed there and seems to work fine. I am wondering if anyone else has run into a similar issue.
Update
After the comment by SilentTremmor, we now think it may actually be IIS or some other server issue. Originally we didn't think it could be, but some digging suggests otherwise. The client code only ever receives 7k of data; no matter what we try to download, the result is always the same.
It turns out the API application was writing the file to a different instance of our application. The client code had no idea and was trying to download a file that did not exist there, so the file the download link was fetching was essentially empty, thus the small file size.