I'm new to Angular and experimenting a bit. I have a REST API running locally (an Express app), and I have Angular code in the public folder that consumes the REST API. All server and client code runs from one codebase and everything works fine: I can add, update, and delete items in a MongoDB database that is running locally (code is here).
The idea, however, is to run the server API on Heroku and to split off the Angular code so that all HTML and JS files live in a completely separate folder, not part of any Express app. The server part works well: I can add, update, and delete items using Postman. When using the Angular UI, though, it seems the Angular code is not executed (see screenshot).
The code for this is here.
Any thoughts on why running this HTML code does not seem to work? Enabling or disabling CORS in the Chrome browser does not help.
It looks like there is a special character at the bottom of your controller.js.
controllers/controller.js:
    $scope.updateTodo = function() {
        console.log("Completed" + $scope.todo.completed);
        $http.put(url + $scope.todo._id, $scope.todo).success(function(response) {
            console.log("new updated: " + response.updated_at);
            refresh();
        });
    };
}]); <------- See invalid characters
I pulled the code locally from your GitHub repository, removed the special character, and it ran as expected. Hope this helps!
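As a side note (not part of the original fix), one quick way to spot an invisible character like this is to dump the end of the file with non-printing characters made visible, for example:
cat -v controllers/controller.js | tail -n 3
Anything that shows up in ^X or M-X notation at the end of the file is a stray byte you can delete.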
See the error on my website here
I have embedded a Blazor app in my Jekyll site. It runs perfectly locally, but when I publish it to GitHub Pages, I get this error:
Failed to find a valid digest in the 'integrity' attribute for resource 'https://chrisevans9629.github.io/blazor/xt/_framework/wasm/dotnet.3.2.0-rc1.20222.2.js' with computed SHA-256 integrity 'yVt8FYsTQDifOGsifIkmEXwe+7ML0jZ1dMi2xluiDXQ='. The resource has been blocked.
This is something that I think Blazor generates when the page is run. This is what my page that starts Blazor looks like:
<script src="js/index.js"></script>
<app>Loading...</app>
Built with <3 using Blazor
<script src="_framework/blazor.webassembly.js"></script>
This is what the page looks like on GitHub Pages:
<script src="js/index.js"></script>
<app>Loading...</app>
<p>Built with <3 using Blazor
<script src="_framework/blazor.webassembly.js"></script></p>
<script type="text/javascript">var Module; window.__wasmmodulecallback__(); delete window.__wasmmodulecallback__;</script><script src="_framework/wasm/dotnet.3.2.0-rc1.20222.2.js" defer="" integrity="sha256-iZCHkFXJWYNxCUFwhj+4oqR4fkEJc5YGjfTTvdIuX84=" crossorigin="anonymous"></script></body>
Why is this error happening and how can I fix it? I've thought about creating a script that would remove the integrity attribute, but I don't think that would be a good solution.
I found an answer here
Cause
Because I am using GitHub Pages to host my Blazor app, the code is pushed up with Git. By default, Git tries to normalize line endings when committing code, which was causing the integrity checks of the Blazor app to fail because the files were changing.
Solution
To fix this, I added a .gitattributes file to my Blazor folder with * binary as its contents (see below).
This tells Git to treat all files as binary and therefore not to normalize the line endings. After I did that, I had to delete the _framework folder in my Blazor app and rebuild it. After doing this, the Blazor app worked.
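For reference, the whole .gitattributes file is just this single line:
* binary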
In case someone else ends up here with the issue I had today...
I also got this error on a Blazor WASM app locally after a simple modification, and it still appeared after reverting the changes.
The solution for me was to do a clean and rebuild.
In my case, it was a wrong target framework in the publish profile - I should not have selected win-x64.
I'm not sure of the exact reason, but the server interferes in some way with the response, based on the target framework. Just select browser-wasm and redeploy; it should be fine.
I spent too much time on this issue. Clean and Rebuild did not work for me.
What worked for me was deleting the bin and obj folders from the Client (Blazor WASM) project.
Environment
.NET 5 and 6
Visual Studio 2019 and 2022
Just to leave a note here on something I came across while trying to figure out what was going on.
If for some reason you removed the service worker from your app and the resources were cached in the regular HTTP cache, you may get this error once you re-enable the service worker, because it will pick up the HTTP-cached version of each file rather than the server's.
What I did was to add cache: "no-cache" to the Request's init.
So my onInstall now looks something like this
async function onInstall(event) {
    console.info('Service worker: Install');
    // Fetch and cache all matching items from the assets manifest
    const assetsRequests = self.assetsManifest.assets
        .filter(asset => offlineAssetsInclude.some(pattern => pattern.test(asset.url)))
        .filter(asset => !offlineAssetsExclude.some(pattern => pattern.test(asset.url)))
        .map(asset => new Request(asset.url, { integrity: asset.hash, cache: "no-cache" }));
    // Also cache authentication configuration
    assetsRequests.push(new Request('_configuration/TestApp.Client'));
    await caches.open(cacheName).then(cache => cache.addAll(assetsRequests));
}
It looks like the hashes generated in the ServiceWorkerAssetsManifest and the hashes computed on the client side don't match. It seems ServiceWorkerAssetsManifest does not regenerate a file's hash when the file is modified, especially for static files.
I had the same problem today; in my case the error came from a CSS file.
The problem was that I had two versions of my application deployed to local folders.
At first I started the old version, closed it and then opened up the new version.
It seems that the old CSS file was cached in the browser, which caused the error to appear.
The fix was simply pressing CTRL + U to open the page source (index.html), clicking on the CSS file that caused the error, and pressing F5 to reload the file. This solved the error for me.
A better solution!
Open service-worker.js
change
.map(asset => new Request(asset.url, { integrity: asset.hash }));
to:
.map(asset => new Request(asset.url));
Now it works!
I had this same issue and none of these solutions worked for me, but they set me on the right path. I am deploying to my local machine and using IIS for testing purposes, and I found that in the publish profile I had created in Visual Studio 2022, the "Remove additional files at destination" checkbox was not checked; as soon as I checked it and republished, everything worked fine. I must have removed a file that was published in a previous build, and it was still sitting at the destination since it wasn't being deleted by any subsequent builds/publishes. This solved it for me, and it might help someone else too.
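As a rough pointer (not something from the original answer): the checkbox is saved in the .pubxml publish profile, and for a folder/File System publish it should, if I recall correctly, correspond to a property along these lines:
<PropertyGroup>
  <!-- assumption: maps to the "Remove additional files at destination" checkbox for a File System publish -->
  <DeleteExistingFiles>True</DeleteExistingFiles>
</PropertyGroup>
A Web Deploy profile stores the equivalent setting under a different property name, so check the profile Visual Studio actually generated.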
I'm trying to familiarize myself with the concept of using script tags. I'm making a Ruby on Rails app that does something as simple as alerting "Hi" when a customer visits a page. I am testing this public app on a local server and I have the shopify_app gem installed. The app has been authenticated and I have access to the store's data. I've viewed the Shopify API documentation on using script tags and I've looked at the Shopify Embedded App example that Shopify has on GitHub. The documentation details the properties of a script tag and gives examples of script tags with their properties defined, but doesn't say anything about where to place the script tag in an application, or how to configure an environment so that the JS file in the script tag will go through.
I've discovered that a JS file added with a script tag will only work if the file is hosted online, so I've uploaded the JS file to Google Drive. I have the code for the script tag in the index action of my HomeController (the default page for the app). This is the code I'm using:
def index
  if response = request.env['omniauth.auth']
    sess = ShopifyAPI::Session.new(params[:shop], response[:credentials][:token])
    session[:shopify] = sess
    ShopifyAPI::Base.activate_session(sess)
    ShopifyAPI::ScriptTag.create(
      :event => "onload",
      :src => "https://drive.google.com/..."
    )
  end
end
I think the problem may be tied to request.env. The response is not being read as request.env['omniauth.auth'], and I believe that the response coming back as valid may be required for the script tag to go through.
The method that I tried above is from the 2nd answer given in this topic: How to develop rails app for shopify with ScriptTags.
The first answer suggested using this code:
ShopifyAPI::Base.site = token
s = ShopifyAPI::ScriptTag.create(:events => "onload",:src => "your javascript url")
However, it doesn't say where to place those two lines of code in a Rails application. I tried putting the second line in a JS file in my Rails application, but it did not work.
I don't know if I'm encountering problems because I'm running the app on a local server or if there is something missing from the configuration of my application.
I'd appreciate it if anyone could point me in the right direction.
Try putting something like this in config/initializers/shopify_app.rb
ShopifyApp.configure do |config|
  config.api_key = "xxx-xxxx-xxx-xxx"
  config.secret = "xxx-xxxx-xxx-xxx"
  config.scope = "read_orders, read_products"
  config.embedded_app = true
  config.scripttags = [
    {event: 'onload', src: 'https://yourdomain.herokuapp.com/javascripts/yourjs.js'}
  ]
end
Yes, you are correct that the JS file you want to include with your script tag needs to be publicly available - if you are using localhost for development, look into ngrok (example below).
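For example, a typical ngrok invocation to expose a local Rails server looks like this (the port is an assumption; use whichever port your app actually listens on):
ngrok http 3000
ngrok then gives you a public https URL you can use for the script tag's src and for your app's callback settings.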
Do yourself the favor of ensuring your callbacks use SSL when interacting with the Shopify API (i.e. configure your app with https://localhost/ as a callback setting in the Shopify app settings). I went through the trouble of configuring thin as the web server locally with a self-signed SSL certificate.
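For what it's worth, starting thin with a self-signed certificate looks roughly like the following (a sketch only; the key and certificate file names are placeholders you would generate yourself, e.g. with openssl):
thin start --ssl --ssl-key-file localhost.key --ssl-cert-file localhost.crt -p 3000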
With a proper set up you should be able to debug why the response is failing the omniauth check.
I'm new to the Shopify API(s), but not Rails. Their documentation leaves a lot to be desired.
Good luck to you sir,
I have a PDF that is rendered from a server-side HTML file in my Meteor application using webshot. This PDF is displayed in the browser, and also attached to an email that is sent to various users. Since migrating over to Meteor's Galaxy platform, I am unable to render the images in the HTML file, and the email attachment doesn't work correctly. My setup worked perfectly on Digital Ocean with Ubuntu 14.04, and also on my localhost. It still works perfectly in both of those environments, but doesn't work with Galaxy. (It's worth noting I don't know much about programming email attachments, but I used Meteor's email package, which is based on mailcomposer.)
The PDF renders, so I know PhantomJS is working, and webshot is taking a screenshot and displaying it as a PDF, so I know webshot is working. However, the images won't render, and when the PDF is attached to an email, the file is corrupted/doesn't send correctly. I have tried logging the HTML to ensure that the URLs to the image files are all correct, and they are when deployed to Galaxy, but they just won't render with PhantomJS/webshot. I am using the meteorhacks:ssr package to render the HTML file on the server before reading it with PhantomJS.
I've tried contacting Galaxy support about this, but haven't had much assistance. Has anyone else experienced this? I'm struggling to even pinpoint which package is causing the issue so I can submit a pull request if I need to. Thanks!
So I figured out my problem, which I'll share with others, but I'll also share a few pointers on debugging webshot in an app running on Galaxy's servers.
First, webshot doesn't pipe errors to Galaxy's logs by default, since it runs in a spawned Node.js process, so you need to change this line in your 'project_path/.meteor/local/isopacks/npm-container/npm/node_modules/webshot/lib/webshot.js' file (note: I'm still on Meteor 1.2, so adjust this to wherever your npm webshot package is located):
// webshot.js line 201 - add , {stdio: "inherit"} to spawn method
var phantomProc = crossSpawn.spawn(options.phantomPath, phantomArgs, {stdio: "inherit"});
This passes all logs from the spawned process to your console. In addition to this, comment out the following code in the same file:
// comment out lines 234-239
// phantomProc.stderr.on('data', function(data) {
// if (options.errorIfJSException) {
// calledCallback = true;
// clearTimeout(timeoutID);
// cb(new Error('' + data))
// }
// });
These two modifications will print logs from the PhantomJS process to your Galaxy container. In addition, you will want to modify the webshot.phantom.js script located in the same directory to print to the console in order to debug. This is the script you can modify however you see fit to track down your issue, but the PhantomJS docs recommend using page callbacks to debug errors from the web page being loaded, such as:
page.onResourceError = function(resourceError) {
    console.log('Unable to load resource (#' + resourceError.id + 'URL:' + resourceError.url + ')');
    console.log('Error code: ' + resourceError.errorCode + '. Description: ' + resourceError.errorString);
};
For my particular issue, I was getting an SSL handshake issue:
Error code: 6. Description: SSL handshake failed
To fix this, I had to add the following code to my webshot options object:
phantomConfig: {
    "ignore-ssl-errors": "true",
    "ssl-protocol": "any"
},
This fixed the issue with loading the static images in my PDF over HTTPS (note: this worked correctly on Digital Ocean without the code above; I'm not sure what is different about the SSL configuration on Galaxy's containers).
In addition, I was having issues attaching the PDF correctly to an email my app sent. This turned out to be an issue with building the URL for the email using Meteor.absoluteUrl() in the mailcomposer attachments filePath option. I don't know why Meteor.absoluteUrl() did not produce my app's URL correctly in an email attachment on Galaxy, as Meteor.absoluteUrl() works in other places in my app, and it worked on Digital Ocean, but it didn't work here. When I switched the attachment object over to a hard-coded URL, it worked fine, so that might be something worth checking if you are having issues.
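For illustration only (the recipient, file name, and URL below are hypothetical placeholders, not values from my app), the working attachment looked roughly like this:
Email.send({
    to: "user@example.com",                 // hypothetical recipient
    from: "reports@myapp.example.com",      // hypothetical sender
    subject: "Your PDF report",
    text: "The report is attached.",
    attachments: [{
        fileName: "report.pdf",
        // hard-coded URL instead of Meteor.absoluteUrl("pdfs/report.pdf")
        filePath: "https://myapp.example.com/pdfs/report.pdf"
    }]
});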
I know quite a few Meteor developers have used webshot to create PDFs in their apps, and I'm sure some will be migrating to Galaxy in the future, so hopefully this is helpful to others who decide to switch to Galaxy. Good luck!
I want to fetch some information from a website using the phantomjs/casperjs libraries, as I'm looking for the HTML result after all the JavaScript on the site has run. I got it working with the following code from this answer:
var page = require('webpage').create();
page.open('http://www.scorespro.com/basketball/', function (status) {
    if (status !== 'success') {
        console.log('Unable to access network');
    } else {
        var p = page.evaluate(function () {
            return document.getElementsByTagName('html')[0].innerHTML;
        });
        console.log(p);
    }
    phantom.exit();
});
I also worked out how to get phantomjs/casperjs running on Heroku by following these instructions, so when I now run heroku run phantomjs theScriptAbove.js in the OS X terminal, I get the HTML of the basketball scores website as expected.
But what I actually want is to get that HTML text from within a Mac desktop application; this is the reason I was looking for a way to run the scripts on a web server like Heroku. So, my question is:
Is there any way to get the HTML text (that my script prints as a result) remotely within my Objective-C desktop application?
Or, asked another way: how can I run my script remotely and get its result via POST/GET?
P.S.
I'm comfortable with Rails applications, so if there's a way to do this using Rails, I just need the basic idea of what to do and how to get the phantomjs script to communicate with Rails. But I think there might be an even simpler solution...
If I understand you correctly, you're talking about interprocess communication - so that Phantom's result (the page HTML) can somehow be retrieved by the app.
Per the PhantomJS docs, a couple of options:
write the HTML to a file and pick up the file in your app
run the webserver module, do a GET against Phantom, and have the Phantom script respond with the page HTML (a rough sketch follows below)
see http://phantomjs.org/api/webserver/
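As a minimal sketch of the second option (assuming the scorespro URL from your question and an arbitrary port 8080; adapt both to your setup), the PhantomJS script could look something like this:
var webserver = require('webserver').create();
var service = webserver.listen(8080, function (request, response) {
    // Render the page on every incoming GET request
    var page = require('webpage').create();
    page.open('http://www.scorespro.com/basketball/', function (status) {
        var html = (status === 'success')
            ? page.evaluate(function () {
                  return document.getElementsByTagName('html')[0].innerHTML;
              })
            : 'Unable to access network';
        // Reply to the caller with the rendered HTML
        response.statusCode = 200;
        response.write(html);
        response.close();
        page.close();
    });
});
Your desktop app (or a small Rails proxy) would then just issue a GET to that endpoint and read the body of the response.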
I'm working on the maintenance phase of a website and I'm running into problems with it.
There's an issue where JavaScript trying to call a WCF service throws an 'Uncaught ReferenceError'.
Here is the relevant part of that script:
$(document).ready(function() {
    Utility.blockUI();
    AwmsUI.Actions.page_id = Utility.UrlParam("pid");
    AwmsUI.Actions.mode = Utility.UrlParam("mode");
    wcf.wmsService.GetAllOnlineComponentType(AwmsUI.Actions.page_id, AwmsUI.Actions.newComponentType);
That's just a part of the whole long function.
The error occurs at the last line, where it should call the service 'wmsService' in the 'wcf' namespace.
[ServiceContract(Namespace = "wcf")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class wmsService
{ blah blah ... }
I've checked the WCF service hosted in my local IIS and it seems it is not found (it displays a blank page).
I've checked the WCF service hosted in the customer's dev environment and it displays 'Endpoint not found'.
The page runs just fine in the customer's dev environment, but it gets stuck on my site.
I did get the latest source code and compared it with the repository to make sure no changes were made.
I think I must have made a mistake somewhere in the configuration or something, but I have no idea what I should correct.
Could you guys please help me out?
I'm running on IIS 7 using AppPool Classic 2.0
Thanks & Regards
Hoang
I've managed to figure out the issue myself.
The solution was:
Run cmd.exe and execute:
C:\Windows\Microsoft.NET\Framework64\v3.0\Windows Communication Foundation\servicemodelreg.exe -i
or on 32 bit:
C:\Windows\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\ServiceModelReg.exe -i