Some clients are failing to load the Google API in a production environment, but I'm not able to find anything wrong with my code.
Here's what I've got:
// Load Google's JavaScript Client API using the requireJS async! plugin.
// You can learn more about the async plugin here: https://github.com/millermedeiros/requirejs-plugins/blob/master/src/async.js
define([
'async!https://apis.google.com/js/client.js!onload'
], function () {
'use strict';
console.log("googleAPI has loaded", window.gapi, window.gapi.client);
return window.gapi;
});
I've pulled this information from: Load async resource with requirejs timeout
The error message being displayed is:
Uncaught Error: Load timeout for modules:
async!https://apis.google.com/js/client.js!onload_unnormalized2,async!https://apis.google.com/js/client.js!onload
http://requirejs.org/docs/errors.html#timeout
This code doesn't produce any issues for me locally. It loads fine.
The first step I took to debug the issue was increasing waitSeconds in my require config from 7 to 90, thinking that maybe many clients have very slow connections:
define(function () {
require.config({
baseUrl: 'js/',
enforceDefine: true,
// Seeing load timeouts on the googleAPI module -- checking whether a longer wait time helps.
waitSeconds: 90,
...
});
});
This did not seem to help; many clients are still reporting the timeout.
What other debugging options are available to me in this scenario? Thanks
So I ran into a similar issue when trying to debug my application in Internet Explorer 8, and maybe my experience is similar to your clients'.
I found that I was receiving a timeout error because my test browser was having trouble retrieving resources over https due to outdated root certificates. I do all of my Internet Explorer testing in a VM running Windows 7. When I tried to load my application I received the same error you mentioned in your post:
Uncaught Error: Load timeout for modules:
async!https://apis.google.com/js/client.js!onload_unnormalized2,async!https://apis.google.com/js/client.js!onload http://requirejs.org/docs/errors.html#timeout
I tried the URL manually and was brought to an 'Untrusted Certificate' warning page in my browser. This confused me, but as I later found out, the root certificates on my Windows OS were outdated and were causing the https handshake to fail. The async plug-in seems to swallow the authentication exceptions and just sits waiting for something to magically go through, leading to the timeout error.
The solutions I found to work are as follows:
Update the root certificates of the OS
Change the security properties of my browser to not require certificate authentication
As for suggestions for further debugging, I have two ideas:
Try setting your browser's security properties as high as possible and see if you can recreate the issue (or just temporarily remove your local certificates)
Ask your clients to set their browser's security properties as low as possible, making sure to disable certificate authentication requirements, and ask them if they are still having problems. (If their problems go away, they might need to manually update their local certificates)
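Beyond those two, it can also help to register RequireJS's global error handler so the timeout is logged (or reported back to your server) instead of dying silently in the console. A minimal sketch, assuming you have some endpoint for client-side logs (logToServer below is a placeholder, not a real function):

// Report module load failures (including async! plugin timeouts) back to the server.
requirejs.onError = function (err) {
    if (err.requireType === 'timeout') {
        // err.requireModules lists the module IDs that failed to load in time.
        console.warn('Load timeout for modules:', err.requireModules);
        logToServer({ type: 'requirejs-timeout', modules: err.requireModules }); // placeholder reporting hook
    } else {
        // Re-throw anything we don't handle so it isn't silently swallowed.
        throw err;
    }
};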
I have this experiment which I only run on my local machine: I load an external webpage, for example https://example.com, and then with Puppeteer I inject a JavaScript file which is served from http://localhost:5000.
So far there are no issues. But this injected JavaScript file loads a WebAssembly file, and then I get the following error:
Uncaught (in promise) ReferenceError: SharedArrayBuffer is not defined
....
And indeed, SharedArrayBuffer is not defined (Chrome v96), with the result that my code is not working at all (it used to work, though). So my question is: how can I solve this error?
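For what it's worth, a quick way to confirm whether the page is cross-origin isolated (which is what gates SharedArrayBuffer in recent Chrome versions) is a couple of standard browser checks, nothing project-specific:

// Logs whether the current page meets the isolation requirement for SharedArrayBuffer.
console.log('crossOriginIsolated:', self.crossOriginIsolated);
console.log('SharedArrayBuffer available:', typeof SharedArrayBuffer !== 'undefined');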
Reading more about this, it seems that you can add two headers
res.setHeader('Cross-Origin-Opener-Policy', 'same-origin');
res.setHeader('Cross-Origin-Embedder-Policy', 'require-corp');
which I did for both files, without much success. Maybe this will not work, given that the page is from a different domain than the injected JS and WASM files.
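For context, this is roughly how those headers are being set on the localhost server (a minimal sketch assuming the injected script and the WASM file are served by Express on port 5000; my real setup differs in the details):

const express = require('express');
const app = express();

// Send the cross-origin isolation headers with every response,
// including the injected JS and the .wasm file it fetches.
app.use((req, res, next) => {
    res.setHeader('Cross-Origin-Opener-Policy', 'same-origin');
    res.setHeader('Cross-Origin-Embedder-Policy', 'require-corp');
    next();
});

app.use(express.static('public')); // 'public' is a placeholder for the folder holding the JS and WASM files
app.listen(5000);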
But maybe there is another solution possible. Here is my command to start Chrome:
client.browser = await puppeteer.launch({
headless: false,
devtools: true,
defaultViewport: null,
executablePath: '/Applications/Google Chrome.app/Contents/MacOS/Google Chrome',
args: [
'--debug-devtools',
'--no-sandbox',
'--disable-setuid-sandbox',
'--disable-web-security',
'--allow-running-insecure-content',
'--disable-notifications',
'--window-size=1920,1080'
]
//slowMo: 500
});
I know Chrome has a lot of options, so maybe there is one for this SharedArrayBuffer issue as well?
Hope someone knows how this works and can help me. Thanks a lot!
In this thread someone suggested starting Chrome as follows:
$> chrome --enable-features=SharedArrayBuffer
meaning I can add --enable-features=SharedArrayBuffer to my Puppeteer config!
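In my case that just means adding the flag to the args array I already pass to puppeteer.launch, something like this (trimmed to the relevant parts):

client.browser = await puppeteer.launch({
    headless: false,
    args: [
        // ...existing flags from my config above...
        '--enable-features=SharedArrayBuffer' // re-enables SharedArrayBuffer without cross-origin isolation
    ]
});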
Peter Beverloo made an extensive list of Chromium command line switches on his blog a while back.
There are lots of command lines which can be used with the Google Chrome browser. Some change behavior of features, others are for debugging or experimenting. This page lists the available switches including their conditions and descriptions. Last automated update occurred on 2020-08-12.
See https://peter.sh/experiments/chromium-command-line-switches/
If you're looking for a specific command it will be there; give it a shot. Though I'm pretty sure cross-origin restrictions were implemented specifically to prevent what you're trying to do.
I'm new here and having some issues on my Moodle site, through which I provide online training. We upload SCORM packages to Moodle and recently have had an issue which stops the SCORM packages from loading, or sometimes just makes them take a very long time to load.
We receive the SCORM error that the "SCORM player has determined that your internet connection is unreliable or has been interrupted. If you continue in the SCORM activity, your progress may not be saved. You should exit the activity now and return when you have a dependable connection".
However, we have tried this from a number of different internet points and devices, with the same problem reoccurring. We therefore contacted our hosting provider, who replied:
"It appears the issue is coming from the fact that there are quite a few JavaScript errors on the site. I am pasting them below:
Failed to load resource: net::ERR_FAILED
chrome-extension://dliochdbjfkdbacpmhlcpmleaejidimm/cast_sender.js
Failed to load resource: net::ERR_FAILED
chrome-extension://enhhojjnijigcajfphajepfemndkmdlo/cast_sender.js
Failed to load resource: net::ERR_FAILED
chrome-extension://fmfcbgogabcbclcofgocippekhfcmgfj/cast_sender.js
Failed to load resource: net::ERR_FAILED
chrome-extension://pkedcjkdefgpdelpbcmbmeomcjbeemfm/cast_sender.js
Failed to load resource: net::ERR_FAILED
chrome-extension://fjhoaacokmgbjemoflkofnenfaiekifl/cast_sender.js
Failed to load resource: net::ERR_FAILED
jquery.js:5 Uncaught TypeError: Cannot read property 'scrollHeight' of null
(anonymous function) # jquery.js:5
x.extend.access # jquery.js:3
x.fn.(anonymous function) # jquery.js:5
e # content-script.js:1
d # content-script.js:1
(anonymous function) # content-script.js:1"
Can anyone help me identify the problem that is causing these loading issues with my SCORM packages?
Kind regards
Eddie
I had the same issue, with the message "SCORM player has determined that your internet connection is unreliable or has been interrupted" appearing on all my Moodle servers, even though the server was working fine and the work was being saved correctly.
I found two solutions:
Set a bigger timeout (by default, Moodle checks the internet connection with a 2-second timeout). You can raise this to 5, 7 or 10 seconds. The value is set in lib/yui/src/checknet/js/checknet.js (search for the request to a checknet.txt file); a rough sketch of the kind of change is shown below.
Remove the checknet functionality. You can comment out two lines in /mod/scorm/player.php. The lines you need to comment out are these:
$PAGE->requires->string_for_js('networkdropped', 'mod_scorm');
$PAGE->requires->yui_module('moodle-core-checknet', 'M.core.checknet.init', array(array(
'message' => array('networkdropped', 'mod_scorm'),
)));
This is not a fix for a server that is genuinely misbehaving; it is a fix for a server that works fine but whose AJAX responses take more than 2 seconds.
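Purely as a hedged sketch of the first solution (I'm assuming the checknet probe goes through YUI's Y.io, which accepts a timeout in milliseconds; the URL and surrounding code here are illustrative, so check the real checknet.js before copying anything), the change looks roughly like this:

// Hypothetical excerpt in the style of lib/yui/src/checknet/js/checknet.js -- not the literal Moodle source.
Y.io(M.cfg.wwwroot + '/lib/yui/build/moodle-core-checknet/assets/checknet.txt', {
    timeout: 10000, // was 2000 (2 seconds); 10 seconds tolerates slow AJAX responses
    on: {
        success: function () {
            // Connection looks fine; carry on silently.
        },
        failure: function () {
            // This is where the "connection unreliable" warning gets triggered.
        }
    }
});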
I don't believe that the first few errors (for cast_sender.js) are related to the issue at hand; cast_sender.js is a local script related to Chrome's ability to use Chromecast functionality. (Edit: the Google Cast SDK uses a rather "agricultural" method to detect whether you're running Chrome with the appropriate extension; it's a known issue. See Google chrome cast sender error if chrome cast extension is not installed or using incognito and https://code.google.com/p/google-cast-sdk/issues/detail?id=309)
The later lines they've pasted seem to bubble up from what is effectively a null pointer in whatever content-script.js is. Is the SCORM content locally produced? Do you know what software was used to create it, e.g. Articulate or Storyline? I presume that content-script.js is part of the player software?
I've been trying to solve the following problem for a few days now, and it's been driving me absolutely crazy.
I have a Meteor (1.2) application, deployed at http://some.application.com:3000. It works great and does what it is supposed to do. The application uses several packages; the ones that I think are related to this issue are autoupdate and the accounts package (which pulls in a bunch of its own dependencies).
Our directive is to turn this webapp into an Android app, something we've been told Meteor can do "quite easily". On the surface it seems like a simple case of meteor run android-device --mobile-server http://some.application.com:3000 --settings settings.json --verbose, however this doesn't do what I expect it to do.
Meteor decides to do the DDP connection on 10.0.2.2 (for whatever reason), and no matter what env variables I set I end up in the same situation.
It's important to note that the application has not been written using the DDP.connect(url) method anywhere [docs], so everything relies on the primary DDP connection (which I suspect might be one of the bigger causes of our problem).
For the record, here is my startup script. I got pretty desperate and added many, many env vars, and haven't had any luck for any combination thereof.
#!/bin/bash
export AWS_REGION=xxx
export AWS_BUCKET=xxx
export MONGO_URL=mongodb://some.application.com:27017/application
export QUEUE_ADDRESS=http://some.application.com
export AWS_ACCESS_KEY_ID=xxx
export AWS_SECRET_ACCESS_KEY=xxx
export ROOT_URL=http://some.application.com:3000
export DDP_DEFAULT_CONNECTION_URL=http://some.application.com:3000
export MOBILE_DDP_URL=http://some.application.com:3000
export MOBILE_ROOT_URL=http://some.application.com:3000
# Let's go
meteor run android-device --mobile-server http://some.application.com:3000 --settings settings.json --verbose
Running it locally, on mobile or desktop (via localhost:3000 with port forwarding, or any other internal IP such as 10.x.x.x or 192.x.x.x), works absolutely fine. It even works with the remote AWS, queue and DB.
According to all the documentation the --mobile-server switch should sort things out. It doesn't. I've tried it with and without an =, wrapped in quotes, all possible ways of defining it.
Looking at the <head> of my document I see the following code getting injected
__meteor_runtime_config__ = JSON.parse(decodeURIComponent("%7B%22meteorRelease%22%3A%22METEOR%401.2.0.2%22%2C%22PUBLIC_SETTINGS%22%3A%7B%22verifiedLogin%22%3Afalse%2C%22enableFacebookAuth%22%3Afalse%2C%22enableTwitterAuth%22%3Afalse%2C%22enableGoogleAuth%22%3Afalse%2C%22cdnUrlWithTrailingSlash%22%3A%22http%3A%2F%2Fdev.cdn.some.application.com%2F%22%2C%22ga%22%3A%7B%22id%22%3A%22UA-XXXXXX-1%22%7D%7D%2C%22ROOT_URL%22%3A%22http%3A%2F%2Flocalhost%3A3000%22%2C%22ROOT_URL_PATH_PREFIX%22%3A%22%22%2C%22appId%22%3A%228emj6c37j3fdoz5qmp%22%2C%22accountsConfigCalled%22%3Atrue%2C%22autoupdateVersion%22%3A%222b3acf7aa3ddef802ddf661d3b3986319aad5122%22%2C%22autoupdateVersionRefreshable%22%3A%22b00197cdb5345434d03d9a2503906349ff7854e2%22%2C%22autoupdateVersionCordova%22%3A%223644168d46bc4597d0b2d8c39e366890f6725f52%22%2C%22DDP_DEFAULT_CONNECTION_URL%22%3A%22http%3A%2F%2Flocalhost%3A3000%22%7D"));
if (/Android/i.test(navigator.userAgent)) {
// When Android app is emulated, it cannot connect to localhost,
// instead it should connect to 10.0.2.2
// (unless we're using an http proxy; then it works!)
if (!__meteor_runtime_config__.httpProxyPort) {
__meteor_runtime_config__.ROOT_URL = (__meteor_runtime_config__.ROOT_URL || '').replace(/localhost/i, '10.0.2.2');
__meteor_runtime_config__.DDP_DEFAULT_CONNECTION_URL = (__meteor_runtime_config__.DDP_DEFAULT_CONNECTION_URL || '').replace(/localhost/i, '10.0.2.2');
}
}
The URL-decoded version of that string is as follows:
{
"meteorRelease": "METEOR#1.2.0.2",
"PUBLIC_SETTINGS": {
"verifiedLogin": false,
"enableFacebookAuth": false,
"enableTwitterAuth": false,
"enableGoogleAuth": false,
"cdnUrlWithTrailingSlash": "http://dev.cdn.application.com/",
"ga": {
"id": "UA-XXXXXX-1"
}
},
"ROOT_URL": "http://localhost:3000",
"ROOT_URL_PATH_PREFIX": "",
"appId": "jfdjdjdjdjdjjdjdjdjjd",
"accountsConfigCalled": true,
"autoupdateVersion": "2b3acf7aa3ddef802ddf661d3b3986319aad5122",
"autoupdateVersionRefreshable": "b00197cdb5345434d03d9a2503906349ff7854e2",
"autoupdateVersionCordova": "3644168d46bc4597d0b2d8c39e366890f6725f52",
"DDP_DEFAULT_CONNECTION_URL": "http://localhost:3000"
}
This is strange because I have no entries of localhost anywhere.
Booting the app tells me: App running at: http://site.some.application.com, but no connections are made in the network inspector.
Grepping through the code shows me that the only place where __meteor_runtime_config__ is mentioned is the autoupdate package.
Further investigation led me to this issue #3815, which linked to this fix, but after I implemented it (the changes to the autoupdate package) I was still faced with the same problem (although hot code fixes stopped coming through from my local machine).
Even more investigation led me to believe that the remote DDP server could be changed like this, but unfortunately that solution doesn't work with Cordova.
I tried setting HTTP_PROXY, as the comment "unless we're using an http proxy" in the <head> script led me to believe this might be a quick fix, but I didn't have any luck with it.
I tried removing the accounts package, but have not had any luck in this regard.
Main question: Is there any suggested way to allow a Cordova-wrapped Meteor application to connect to an arbitrary server, and allow a DDP connection to the same?
The accounts package is (most likely) needed. I suppose auto-updates aren't thaaat crucial, although they do help in terms of not having to regularly release code to the various app stores.
Things I've tried:
Removing the accounts package
Removing autoupdate
Modifying autoupdate to point to the remote DDP server
Using the remote-ddp package
Forcing __meteor_runtime_config__ overrides (a rough sketch follows after this list)
Setting a proxy
Setting environment variables
And several thousand other things
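For completeness, the __meteor_runtime_config__ override attempt looked roughly like this (a sketch from memory, injected before Meteor's own bundle loads in the Cordova index.html; the hostname is the same placeholder used throughout this question):

// Force the Cordova client to talk to the real server instead of localhost / 10.0.2.2.
__meteor_runtime_config__ = __meteor_runtime_config__ || {};
__meteor_runtime_config__.ROOT_URL = 'http://some.application.com:3000';
__meteor_runtime_config__.DDP_DEFAULT_CONNECTION_URL = 'http://some.application.com:3000';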
Related issues (Going back to Jan 2015) are:
How can DDP_DEFAULT_CONNECTION_URL be set? #3852 - Shows difficulty in connecting to remote meteor servers, and touches on how the autoupdate package affects things.
Dont' start local server when using option --mobile-server #3727 - This shows a case of the --mobile-server becoming 10.0.2.2
Meteor mobile build is not changing DDP_DEFAULT_CONNECTION_URL #4396 - This shows an apparent fix, but this doesn't work for me at all
Ability to pass an alternative DDP connection to autoupdate #3815 - This shows the confusion that comes from the official documentation, and lead me to the autoupdate package "fix" that I linked earlier
MOBILE_ROOT_URL and MOBILE_DDP_URL are ignored on meteor build #4581 - This shows how meteor build ignores these env vars
Can't build mobile app with different DDP server #4412 - This shows others having difficulty with the same problem, with the response asking for PRs around the issue
Meteor Accounts only authenticates DDP, not HTTP #3390 - This shows that auth via meteor-accounts can only happen via DDP, and not HTTP
Built apps cannot connect to the given --server: they keep failing to connect #3698 - This shows other users having the same issue on iOS, although they do report having success with connecting to a local server, which I also have success with, but there is no mention of success with a remote server. The fix appears to be deploying through meteor to some-app.meteor.com but this isn't an option for us.
Contents of .meteor/packages
aldeed:autoform#=4.2.2
aldeed:collection2#2.5.0
aldeed:simple-schema#1.3.3
aldeed:tabular#1.4.1
autoupdate#1.2.3
biasport:facebook-sdk#0.2.2
blaze#2.1.3
check#1.0.6
edgee:slingshot#0.7.1
iron:router#1.0.12
jquery#1.11.4
juliancwirko:s-alert#3.1.1
juliancwirko:s-alert-slide#3.1.0
lookback:seo#1.1.0
matteodem:easy-search#1.6.4
meteor#1.1.9
meteorhacks:fast-render#2.10.0
meteorhacks:subs-manager#1.6.2
mobile-experience#1.0.1
momentjs:moment#2.10.6
mquandalle:jade#0.4.4
multiply:iron-router-progress#1.0.2
---
internal packages (one of which includes accounts)
---
reactive-dict#1.1.2
reactive-var#1.0.6
reywood:iron-router-ga#0.7.1
session#1.1.1
standard-minifiers#1.0.1
templating#1.1.4
tracker#1.0.9
underscore#1.0.4
underscorestring:underscore.string#3.2.2
utilities:avatar#0.9.2
I can provide the contents of my versions file if you feel that will help.
TL;DR - Is there any suggested way to allow a Cordova-wrapped Meteor application to connect to an arbitrary server, and allow a DDP connection to the same?
Any help or pointers around this issue would be much appreciated. Please let me know if there is any other information you may need to assist in this regard.
Many thanks
Issue on GitHub
I have a weird problem: around 30+ seconds after a local HTML page has finished loading, a call to the Google AutocompleteService or PlacesService JavaScript functions does not send out a request to do the lookup. If the call is made within those 30 seconds, it works fine; I can even make multiple successful calls within that window.
Background:
We have a C# DLL that is used by a VB6 app running as a Windows service. The DLL is used to do autocomplete and other Places API lookups through the JavaScript API.
Due to the asynchronous nature of the Google lookups, the WebBrowser control lives in its own thread with its own message loop, e.g.:
thrd = new Thread(new ThreadStart(
delegate
{
Init(false);
System.Windows.Forms.Application.Run(this);
}));
// set thread to STA state before starting
thrd.SetApartmentState(ApartmentState.STA);
thrd.Start();
This is all set up once as part of the startup process of the service.
This DLL works fine elsewhere, including apps running in IIS and on the desktop.
Troubleshooting:
I confirmed that the browser thread stays alive on subsequent lookup calls from the Windows service.
Through debugging, I can see that the JavaScript function (below) is being run (I can see the debug outputs) with no errors thrown. However, the Google call, autocomplete.getPlacePredictions, does not send anything out (our network guy was monitoring the traffic while I did the lookup).
Example of the JavaScript function:
function doAutoComplete(waitKey, searchString, latBias, longBias, radiusBias, components, typesFilter) {
//Removed irrelevant code to keep it brief
//debug output here
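// The IIFE below captures waitKey so the async callback reports results against the correct request.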
autocomplete.getPlacePredictions(options, function (waitKey) {
return (function (predictions, status) {
window.external.setResult(waitKey, status, JSON.stringify(predictions));
});
} (waitKey));
//debug output here
}
The service does not have the issue when installed on a Windows 7 machine. So at this stage the issue only happens on a Windows Server 2012 machine.
I have run out of ideas as to what could be causing the Google call to not work.
Any help or ideas will be greatly appreciated.
Edit history:
Added #3 to Troubleshooting.
I have sorted it out.
The issue was that the IE Enhanced Security Configuration (IE ESC) was internally blocking/disabling the call to Google. Turning it off prevents the issue from happening, but I will look further into tweaking it rather than turning it off.
What is annoying is that it was failing silently, so it was hard to track down. I believe the issue only happened when running as a Windows service because of the different security levels used for it versus running from the desktop or IIS.
I'm testing my project on the production server, where several features of my web application are throwing errors even though everything works perfectly on my computer.
Please go to http://qlimp.com and log in using this username/password: nirmal/karurkarur. Then go to http://qlimp.com/cover. You'll find a palette where you can upload images and do something similar to flavors.me. I'm having several problems here (images, text and other information are not getting stored in the database).
I think there is no problem with the setup. The problem is that the requests are not even reaching the Django views properly, although everything works fine on my computer. Has anyone experienced the same problem? I'm wondering why it is not working.
You can also check http://qlimp.com/signup/, where you can see the problem of the data not getting stored.
There are many problems, which I can't ask about in one question (that's not Stack Overflow culture), so I'm asking this one.
When I upload the image, the Chrome inspector's Network tab shows a 502 Bad Gateway.
Here is my Django views.py: https://gist.github.com/2778242
jQuery code for the AJAX image upload:
$('#id_tmpbg').live('change', function()
{
$("#ajax-loader").show();
$("#uploadform").ajaxForm({success: showResponse}).submit();
});
function showResponse(responseText, statusText, xhr, $form) {
$.backstretch(responseText)
$("#ajax-loader").hide();
}
I also checked and found that the request does enter the request.is_ajax() branch but never gets past form.is_valid() in my views. Why is that? I'm uploading the right format.
Could anyone identify the mistake I've made? I'd also like to understand why code that works on the development server is not working on the production server (this would help me solve the rest of the problems).
Development server: Ubuntu 11.10/Python 2.7/Django 1.3.1
Production server: Ubuntu 12.04/Python 2.7/Django 1.3.1
UPDATE
There is some problem with everyone signing in with the same username/password, so please register instead. Registration shows [Errno 111] Connection refused, but that doesn't matter; you can log in afterwards.
UPDATE-2
Actually the problem is with form.is_valid(), so I removed it and checked, but now I'm getting this error:
Exception Type: ValueError
Exception Value: The BackgroundModel could not be created because the data didn't validate.
Exception Location: /home/nirmal/project/local/lib/python2.7/site-packages/django/forms/models.py in save_instance, line 73
I'm always uploading the right image format, and I don't know why it is not validating.
UPDATE-3
I'm getting 304 Not Modified for all the static files on http://qlimp.com/cover. Could this be part of the problem?
It's Nginx that gives the 502 error when gunicorn is not available.
gunicorn_django -bind=127.0.0.1:8001 only launches one synchronous worker process and it may be busy responding to other requests.
You may want to spawn more workers (-w2). If you need to handle big data transfers, consider using an asynchronous worker class (e.g. -k gevent; gevent needs to be installed).
There is more info on choosing the worker class and the number of workers in the Gunicorn FAQ.
I've found the problem that had been blocking me for the past 3 days. I had forgotten to run sudo apt-get install libjpeg62 libjpeg62-dev zlib1g-dev before installing PIL, which is why the image was not being validated.
The next issue was that I had given a relative path for MEDIA_ROOT in my settings.py file, which led to 404 NOT FOUND; I changed it to an absolute path.
So these were simple mistakes that led to some mysterious errors. Thanks to everyone for the help.