Magento install copied - admin menu doesn't work - javascript

I cloned an existing Magento 1.7.2 installation on the same server under a test subdomain. The frontend seems to work, and I can log in to the admin. The admin menu doesn't work, however: no dropdowns, and copying URL paths doesn't work either. I've searched online, and most answers date back to 2008 and suggest that it's a rights issue. So I've changed the rights of folders and files to 755 and 644, but still no working menus. The cache (var/cache) is empty.
These menus are JavaScript-generated. The following error message is from the console:
Error: TypeError: Element.addClassName is not a function
To be clear - the fix is not in the JavaScript itself; it's something on the server. This install works on the same server in another directory with another domain.
Any ideas how to fix this?

The error
Error: TypeError: Element.addClassName is not a function
indicates that some JavaScript on your page can't call the addClassName method.
The addClassName method is added to elements by the Prototype JavaScript framework.
That means it's very likely your browser can't download the prototype.js file. Since it can't download this file, the addClassName method is never defined, and you get the error you're seeing.
Look at the source code of your admin pages and find the script tag that includes the version of prototype shipped with your version of Magento.
<script type="text/javascript" src="http://magento.example.com/js/prototype/prototype.js"></script>
Take the URL from this script tag and load it in your browser.
My guess is you'll get a 404 because the file is missing, or a forbidden error because the file has incorrect permissions, or some other web server error that prevents the file from being shown. It's also possible that the link is pointing to an older domain name that's based on a value configured or cached in Magento.
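If the script tag does point at an old domain, the usual suspects are the base URLs stored in Magento's core_config_data table. A quick sketch for inspecting them (assuming no table prefix):
SELECT path, value FROM core_config_data
WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url');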
Track down the source of that problem, and you'll be good to go.

Another reason could be that the skin and CSS URLs are not correct for your environment.
I've just moved a site from live to local, and the skin/css/media URLs were configured for a subdomain, so I looked in the core_config_data table and updated the URLs.
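A hedged sketch of that update (the example.com domains are placeholders for your own, and no table prefix is assumed):
UPDATE core_config_data
SET value = REPLACE(value, 'http://live.example.com/', 'http://test.example.com/')
WHERE path LIKE 'web/%base%url';
Clear var/cache afterwards so the new values are picked up.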

Please check whether you have set merge JS or CSS to yes; you can update this via the DB if you can't do it via the menu:
SELECT * FROM core_config_data WHERE path LIKE 'dev%'
Change merge_css and merge_js from 1 to 0.
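If you prefer to flip them directly in SQL, a sketch using the standard Magento 1 paths:
UPDATE core_config_data SET value = '0'
WHERE path IN ('dev/js/merge_files', 'dev/css/merge_css_files');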

In my case I changed the permissions of the folder and, recursively, of the files and folders inside it, and it started working. Try it once.
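A minimal sketch of that recursive change from the shell, assuming you run it from the Magento root:
find . -type d -exec chmod 755 {} \;
find . -type f -exec chmod 644 {} \;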

Related

wasm/dotnet Integrity attribute invalid for my Blazor app on Github pages

See the error on my website here
I have embedded a blazor app in my jekyll site. It runs perfectly locally, but when I publish it on github pages, I am getting this error:
Failed to find a valid digest in the 'integrity' attribute for resource 'https://chrisevans9629.github.io/blazor/xt/_framework/wasm/dotnet.3.2.0-rc1.20222.2.js' with computed SHA-256 integrity 'yVt8FYsTQDifOGsifIkmEXwe+7ML0jZ1dMi2xluiDXQ='. The resource has been blocked.
This is something that I think Blazor generates when the page is run. This is what my page that starts Blazor looks like:
<script src="js/index.js"></script>
<app>Loading...</app>
Built with <3 using Blazor
<script src="_framework/blazor.webassembly.js"></script>
This is what the page looks like on github pages:
<script src="js/index.js"></script>
<app>Loading...</app>
<p>Built with <3 using Blazor
<script src="_framework/blazor.webassembly.js"></script></p>
<script type="text/javascript">var Module; window.__wasmmodulecallback__(); delete window.__wasmmodulecallback__;</script><script src="_framework/wasm/dotnet.3.2.0-rc1.20222.2.js" defer="" integrity="sha256-iZCHkFXJWYNxCUFwhj+4oqR4fkEJc5YGjfTTvdIuX84=" crossorigin="anonymous"></script></body>
Why is this error happening and how can I fix it? I've thought about creating a script that would remove the integrity attribute, but I don't think that would be a good solution.
I found an answer here
Cause
Because I am using GitHub Pages to host my Blazor app, I'm using git to push up the code. By default, git will try to normalize line endings when committing code, which was causing the integrity check of the Blazor app to fail because the files changed.
Solution
To fix this, I added a .gitattributes file to my blazor folder with * binary as the contents.
This tells git to treat all files as binary and therefore not to normalize the line endings. After I did that, I had to delete my _framework folder in my blazor app and rebuild it. After doing this, the blazor app worked.
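For reference, the whole file is just this one line, placed in the folder that holds the published Blazor output:
# .gitattributes - treat every file as binary so git never rewrites line endings
* binary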
In case someone else ends up here with the issue I had today...
I also got this error on a Blazor WASM app locally after a simple modification, and it still appeared after reverting the changes.
The solution for me was to do a clean and rebuild.
In my case, it was a wrong target framework in the publish profile - I should not have selected win-x64.
I'm not sure of the exact reason, but the server interferes in some way with the response, based on the target framework. Just select browser-wasm and redeploy; it should be fine.
I spent too much time on this issue. Clean and Rebuild did not work for me.
What worked for me was deleting the bin and obj folders from the Client (Blazor WASM) project.
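A minimal sketch of that cleanup from a unix-style shell (Client/ is a placeholder for your client project folder; on Windows you can just delete the folders in Explorer):
rm -rf Client/bin Client/obj
dotnet build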
Environment
.Net 5 and 6
Visual Studio 2019 and 2022
Just to leave a note here on something I came across while trying to figure out what was going on.
If for some reason you removed the service worker from your app and the resources were cached in the common HTTP cache, there is a possibility that once you re-enable the service worker you will get this error, because the service worker will pick up the HTTP-cached version and not the server's.
What I did was to add cache: "no-cache" to the Request's init.
So my onInstall now looks something like this
async function onInstall(event) {
    console.info('Service worker: Install');

    // Fetch and cache all matching items from the assets manifest
    const assetsRequests = self.assetsManifest.assets
        .filter(asset => offlineAssetsInclude.some(pattern => pattern.test(asset.url)))
        .filter(asset => !offlineAssetsExclude.some(pattern => pattern.test(asset.url)))
        .map(asset => new Request(asset.url, { integrity: asset.hash, cache: 'no-cache' }));

    // Also cache authentication configuration
    assetsRequests.push(new Request('_configuration/TestApp.Client'));

    await caches.open(cacheName).then(cache => cache.addAll(assetsRequests));
}
It looks like the hashes generated by ServiceWorkerAssetsManifest and the ones computed on the client side don't match. ServiceWorkerAssetsManifest does not seem to regenerate the hash when a file is modified, especially for static files.
Had the same problem today; in my case the error came with a CSS file.
The problem was that I had two versions of my application deployed to local folders.
At first I started the old version, closed it, and then opened up the new version.
It seems that the old CSS file was cached in the browser, which caused the error to appear.
The fix was simply pressing CTRL + U to open up the index.html source, clicking on the CSS file which caused the error, and pressing F5 to reload the file. This solved the error for me.
A better solution!
Open service-worker.js
change
.map(asset => new Request(asset.url, { integrity: asset.hash }));
to:
.map(asset => new Request(asset.url));
Now it works!
I had this same issue and none of these solutions worked for me, but they set me on the right path. I deploy to my local machine and use IIS for testing purposes, and I found that in the publish profile I had created in Visual Studio 2022, the "Remove additional files at destination" check box was not checked. As soon as I checked it and republished, everything worked fine. I must have removed a file that was being published in a previous build, and it was still at the destination since no subsequent build/publish deleted it. But this solved it for me; it might help you too.

Liferay 7 - Wildfly 10 and "X-Content-Type-Options:nosniff"

I'm struggling with Liferay 7 on Wildfly 10 and the corresponding server configuration.
Calling my local installation, I get the following header in the response:
...
X-Content-Type-Options: nosniff
...
Well, I normally really appreciate this, since it is a useful security option against MIME-sniffing attacks, but in conjunction with Liferay it causes the following error on the browser console:
Refused to execute script from 'http://localhost:8080/o/frontend-js-web/liferay/available_languages.jsp?bro…e&colorSchemeId=01&minifierType=js&languageId=de_DE&b=7002&t=1471516992592' because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled.
The problem with this error is that it leads to broken functionality in the CMS backend when you try to configure content or pages. The corresponding menus cannot be opened anymore.
What happens: the above JSP file also contains JavaScript, which will not be executed because the browser respects the above header and blocks the response, since the served MIME type (text/html) is not valid for scripts.
I investigated this problem first in Chrome, and since yesterday, after an update, also in the new version of Firefox.
I tried to find the corresponding setting within the configuration files of Liferay and Wildfly and to disable it, but without any success. No matter what I try, the header is still served by Wildfly.
In addition, this header is only served when I open a web page in Liferay. If I open the Wildfly management console in the browser, the header is not there.
So I presume it's a concrete problem either with my Liferay installation or with Liferay itself. Does anyone know which configuration file I have to adapt in order to disable the serving of this header?
Update
I think the non-working JavaScript is a result of the browser blocking the execution of the contained JavaScript.
I opened the backend / control panel and moved to "Content". The error message is still there:
Refused to execute script from 'http://localhost:8080/o/frontend-js-web/liferay/available_languages.jsp?br…e&colorSchemeId=01&minifierType=js&languageId=de_DE&b=7002&t=1471516992592' because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled.
When I now click on an element, it gives me the following error:
everything.jsp?browserId=other&themeId=admin_WAR_admintheme&colorSchemeId=01&minifierType=js&minifi…:80165 Uncaught TypeError: Cannot read property 'de_DE' of undefined(…)
So, no surprise, the first error leads to a follow-up JavaScript issue, since the language object is not set. And that is why the corresponding menu will not open.
In the meantime I found the corresponding property
...
http.header.secure.x.content.type.options
...
in the "system.properties" file as you described and I set it to "false" in my "portal-ext.properies" file. Afterwards I restarted the server but the header is still there.
Any ideas where I can switch this property off elsewhere? Maybe I should mention that it is only a development environment and later on in the productive environment I have to find another solution for this.
Does anyone know which configuration file I have to adapt in order to disable the serving of this header?
That is configured in system.properties. You can change it in system-ext.properties. If that file doesn't exist, just create one next to portal-ext.properties.
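A sketch of the override, using the property name quoted above (the file sits next to portal-ext.properties):
# system-ext.properties
http.header.secure.x.content.type.options=false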
PLEASE NOTE
I don't think this has anything to do with X-Content-Type-Options itself. This header was introduced in Liferay over 3 years ago, and by now it's been used in many production environments (including ones deployed on WildFly).
If you pay attention to the error message, it actually says:
because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled.
So for some reason the response MIME type is text/html and not application/javascript as expected. From the information you've provided, it's impossible to tell why this is happening.
I had the same issue (with Jboss EAP 7 (~Wildfly 10) and LR 7.0 GA4 CE).
It seems that this is a (recurring?) bug in LR.
I would not turn off that HTTP header; it is there for a reason. Instead, I applied the suggested patch in the proper bundle and it worked for me (I don't know whether this is recommended or not, but it works).
The bundle that contains the mentioned available_languages.jsp can be found among the OSGi bundles; in the case of LR7, here:
$LIFERAY_HOME/osgi/state/org.eclipse.osgi/338/0/bundleFile
This is an OSGi jar; just open it, look for the JSP file in the META-INF/resources/liferay folder, open it, put the line
<%@ page language="java" contentType="text/javascript;charset=UTF-8" pageEncoding="UTF-8" %>
among the other directives, and put it back into the bundle.
After this modification JBoss should be stopped and the $LIFERAY_HOME/work folder deleted (it may work without this step, I don't know, I did it). Then start your JBoss again.
After this step my Liferay instance worked well.
Good luck!
Update:
The solution above is just a temporary one, because those folders belong to the OSGi runtime and these bundleFiles are created only during Liferay's first run - and it seems that from time to time Liferay updates them from the original location, i.e. the lpkg files. So if you would like to solve this problem permanently, fix that JSP in $LIFERAY_HOME/osgi/marketplace/Liferay CE Foundation.lpkg!com.liferay.frontend.js.web-1.0.41.jar - the bundleFile above will be created from this jar.
In Liferay 7, we may get this error if we try to include any .js files in a .jsp page, e.g.:
<script src="/o/module-name/js/jsfile.js" type="application/javascript"></script>
To solve the error, go to the bnd.bnd file inside module-name/ and add the following:
Web-ContextPath: /module-name
It will work.
Thanks

Pimcore: Getting the White-Screen-of-Death after successful admin login

I've installed Pimcore on a VPS through Liquid Web. I loaded the sample-data install, which also uses the nightly-build code. Everything installed fine, and the public-facing website appears fine and functions well, as does the login screen for the admin panel. But once you log in, you see three black pulsing dots in the middle of a white screen; eventually they disappear and you're simply left with a white screen.
Upon inspection of the error console, I'm seeing this error:
Failed to load resource: the server responded with a status of 404 (Not Found)
/website/var/tmp/minified_javascript_core_b18dd1d6984052da2ab5abc79f0c4a17.js?_dc=3704
Other scripts are also failing because this script isn't being loaded, so I'm fairly sure that once this script loads the others will work just fine.
When I try to directly access this JS file, I see this message:
HTTP/1.1 404 Not Found Filtered by error handler (static file exception)
I have verified that the file exists in the filesystem, so I know for sure that it's there, leading me to believe that the filesystem has that directory and/or file locked down. Permissions etc. are all set to their appropriate values.
Pimcore Version 4
It's been a few years and this project surfaced in our pipeline again. The actual cause of the breakage was that we are also running the ModSecurity suite on our host. Accessing the interface .js file was triggering rule 2000009, where the pattern /var/tmp was being matched.
Possible solution (if you're using WHM/CPanel as we are):
Configure your /etc/apache2/conf.d/modsec2/whitelist.conf file to include the following rule (add more in the same place if needed).
<LocationMatch '/website'>
SecRuleRemoveById 2000009
</LocationMatch>
Be sure that you restart your HTTP service after making this update.
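On a WHM/cPanel host, restarting Apache typically looks like this (the exact command varies by setup, so treat this as a sketch):
/scripts/restartsrv_httpd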
Enjoy!

What is the cookie path if I am using XAMPP?

I am trying to find the cookie that I created using Google Chrome on the XAMPP server. I am using the Cookies.js plugin. I know the cookie is being created successfully, because my website loads the settings that I write into the cookie, but I can't find it in htdocs, which I believe is the root folder. The cookie path is the default (root) path.
So after a bit of poking around, I finally found the cookie.
In Chrome, it's stored in either of the two paths below:
C:\Users\your username\AppData\Local\Google\Chrome\User Data\Default
C:\Users\your username\AppData\Local\Google\Chrome\User Data\Default\Local Storage
answer is here: https://superuser.com/questions/459426/where-does-chrome-store-its-cookie-file
or it can be accessed by typing the following in the URL bar
chrome://settings/cookies
In Firefox, it can be accessed at
C:\Users\user\AppData\Roaming\Mozilla\Firefox\Profiles\random characters
answer is also here: https://superuser.com/questions/387372/where-does-firefox-keep-cookies
I checked the paths myself, so I am pretty sure they are right. Correct me if I am wrong. I also noticed that the paths change between some versions of the same browser, so beware of that too.
Cookies are not created in the /htdocs folder; they are held only by the browser. For more information check
http://www.allaboutcookies.org/
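As a quick sanity check that doesn't require digging through profile folders, you can list the current site's cookies from the browser's DevTools console (HttpOnly cookies won't show up here):
console.log(document.cookie); // prints "name=value; name2=value2"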

How do I check if firebug is installed with javascript?

Here it is described how to check whether Firebug is installed, by checking if an image that ships with Firebug exists: http://webdevwonders.com/detecting-firefox-add-ons/
But it seems to be a bit outdated, because the images he uses there don't exist in Firebug anymore.
The Firebug chrome.manifest looks like:
content firebug content/firebug/ contentaccessible=yes
...
but in the whole add-on I now find only one png, and that is placed in the root folder of the add-on. But some other content is accessible, for example: chrome://firebug/content/trace.js
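For reference, the probing technique from the linked article looks roughly like this (the image path is hypothetical, and the probe only works for packages registered with contentaccessible=yes):
// probe a chrome:// resource exposed by the add-on; onload fires only
// if the add-on is installed and the file is content-accessible
var probe = new Image();
probe.onload = function () { console.log('Firebug detected'); };
probe.onerror = function () { console.log('Firebug not detected'); };
probe.src = 'chrome://firebug/content/some-image.png'; // hypothetical path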
So, in general:
How do I make an image accessible that resides inside a Firefox SDK add-on?
I am writing an add-on and I want to make an image ok.png available to all JavaScript in Firefox.
I added the image to the data folder and added a chrome.manifest:
content response-timeout-24-hours data/
content response-timeout-24-hours data/ contentaccessible=yes
But there is no way to call it via a URL like
chrome://response-timeout-24-hours/data/ok.png
How do the paths relate to each other? Which is relative to which?
I created a bug report here.
So if you want to make your add-on detectable, you need another approach:
you can use a PageMod to attach a content script that would wait for a
message from your web app and "respond" by sending another message
back to your app. You would know that if you don't receive the
response, your add-on is not installed. Check out the documentation for
more details:
https://developer.mozilla.org/en-US/Add-ons/SDK/High-Level_APIs/page-mod
I used this to make my add-on detectable.
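A minimal sketch of that approach with the Add-on SDK (file names and message strings here are made up for illustration):
// lib/main.js - attach the content script to every page
var pageMod = require('sdk/page-mod');
var self = require('sdk/self');
pageMod.PageMod({
  include: '*',
  contentScriptFile: self.data.url('detect.js')
});
// data/detect.js - reply to a ping from the web app
window.addEventListener('message', function (event) {
  if (event.data === 'addon-ping') {
    window.postMessage('addon-pong', '*');
  }
}, false);
// in the web app: send the ping; if no pong arrives, the add-on is absent
window.addEventListener('message', function (event) {
  if (event.data === 'addon-pong') { console.log('add-on installed'); }
}, false);
window.postMessage('addon-ping', '*');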
