How can I create a "tmp" directory with Elastic Beanstalk? - javascript

I'm using Node.js and need to save files to a tmp directory within my app. The problem is that Elastic Beanstalk does not make the app directory writable by the app, so when I try to create the temp directory I get this error:
fs.js:653
return binding.mkdir(pathModule._makeLong(path),
^
Error: EACCES, permission denied '/var/app/tmp/'
at Object.fs.mkdirSync (fs.js:653:18)
at Promise.<anonymous> (/var/app/current/routes/auth.js:116:18)
at Promise.<anonymous> (/var/app/current/node_modules/mongoose/node_modules/mpromise/lib/promise.js:177:8)
at Promise.emit (events.js:95:17)
at Promise.emit (/var/app/current/node_modules/mongoose/node_modules/mpromise/lib/promise.js:84:38)
at Promise.fulfill (/var/app/current/node_modules/mongoose/node_modules/mpromise/lib/promise.js:97:20)
at /var/app/current/node_modules/mongoose/lib/query.js:1394:13
at model.Document.init (/var/app/current/node_modules/mongoose/lib/document.js:250:11)
at completeOne (/var/app/current/node_modules/mongoose/lib/query.js:1392:10)
at Object.cb (/var/app/current/node_modules/mongoose/lib/query.js:1151:11)
I've tried several things, such as an app-setup.sh script at .ebextensions/scripts/app-setup.sh that looks like this
#!/bin/bash
# Check if this is the very first time that this script is running
if [ ! -f /root/.not-a-new-instance.txt ]; then
    newEC2Instance=true
fi
# Get the directory of 'this' script
dirCurScript=$(dirname "${BASH_SOURCE[0]}")
# Fix the line endings of all files
find $dirCurScript/../../ -type f | xargs dos2unix -q -k
# Get the app configuration environment variables
source $dirCurScript/../../copy-to-slash/root/.elastic-beanstalk-app
export ELASTICBEANSTALK_APP_DIR="/$ELASTICBEANSTALK_APP_NAME"
appName="$ELASTICBEANSTALK_APP_NAME"
dirApp="$ELASTICBEANSTALK_APP_DIR"
dirAppExt="$ELASTICBEANSTALK_APP_DIR/.ebextensions"
dirAppTmp="$ELASTICBEANSTALK_APP_DIR/tmp"
dirAppData="$dirAppExt/data"
dirAppScript="$dirAppExt/scripts"
# Create tmp directory
mkdir -p "$dirApp/tmp"
# Set permissions
chmod 777 "$dirApp"
chmod 777 "$dirApp/tmp"
# Ensure all the required environment settings after all the above setup
if [ -f ~/.bash_profile ]; then
    source ~/.bash_profile
fi
# If new instance, now it is not new anymore
if [ "$newEC2Instance" = true ]; then
    echo -n "" > /root/.not-a-new-instance.txt
fi
# Print the finish time of this script
echo $(date)
# Always successful exit so that beanstalk does not stop creating the environment
exit 0
As well as creating a file called 02_env.config within .ebextensions that looks like this
# .ebextensions/02_env.config
container_commands:
  01mkdir:
    command: "mkdir /var/app/tmp"
  02chmod:
    command: "chmod 777 /var/app/tmp"
Neither seem to work. How can I create a tmp directory within my app that is writable?

I recently experienced the same issue with a .NET application, where the application was failing because it couldn't write to a directory even after I had set the permissions.
What I found was that after the whole .ebextensions process completed, the final step was a web container permissions update, which ended up overwriting my ebextensions permissions change.
To solve it, I moved the directory outside of the web container and updated the application to write there instead.
In your case I would suggest /tmp.

With the newer (current?) Amazon Linux 2 Elastic Beanstalk installs, setting up a post-deploy hook is the way to make this happen. The tmp folder needs to be created and made writable AFTER Elastic Beanstalk has moved the newly deployed app bundle to /var/app. It's just a shell script placed in the following location from the root of your app:
.platform/hooks/postdeploy/10_create_tmp_and_make_writeable.sh
#!/bin/bash
# -p: don't fail when the directory already exists on a redeploy
mkdir -p /var/app/current/tmp
chmod 777 /var/app/current/tmp


AWS Lambda read-only file system error failed to create directory with Docker image

Problem
The Docker image compiles successfully, but fails when run from Lambda because of its read-only file system.
Summary
Luminati-proxy has a Docker integration for their proxy manager. I copied over their Dockerfile and appended it to my own Dockerfile for pushing a script out to AWS Lambda. Building the Docker image was successful, but when pushed off to Lambda, it failed because of a read-only file system error:
Failed to create directory /home/sbx_user1051/proxy_manager: [code=EROFS] Error: EROFS: read-only file system, mkdir '/home/sbx_user1051'
2022-02-28 19:37:22.049 FILE (8): Failed to create directory /home/sbx_user1051/proxy_manager: [code=EROFS] Error: EROFS: read-only file system, mkdir '/home/sbx_user1051'
Analysis
Upon examining the traceback, the error is focused on the proxy_manager installation and fails on directory changes (mkdir, mk_work_dir, ...). These changes are made within the .js files of the GitHub repo which the Dockerfile pulls in as the proxy_manager installation. Obviously the only mutable directory on Lambda is /tmp, but is there a workaround for getting this set up without resorting to putting everything under /tmp, since it is wiped between runs? Reinstalling proxy_manager on each run is not at all ideal...
Answer?
Could this be as simple as setting environment stipulations such as:
ENV PATH=...
ENV LD_LIBRARY_PATH=...
If so, how should they be configured? I am adding the Dockerfile below for quick reference:
FROM node:14.18.1
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
&& apt-get update \
&& apt-get install -y google-chrome-stable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst fonts-freefont-ttf \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
USER root
RUN npm config set user root
RUN npm install -g npm@8.1.3
RUN npm install -g @luminati-io/luminati-proxy
ENV DOCKER 1
CMD ["luminati", "--help"]
I appreciate the insight!
TL;DR:
You should instead leverage an S3 bucket to store, read, and modify any files. Lambdas and microservices in general should always be treated as stateless.
All Luminati-proxy functionality comes prebuilt within Amazon Lambda and API Gateway.
Lambda functions are not meant to run long-running processes, as they are limited to 15 minutes maximum, so the design of the container that you are trying to run in Lambda has to take AWS serverless architecture considerations into account.
Explanation:
According to the AWS documentation on Lambda functions:
The container image must be able to run on a read-only file system. Your function code can access a writable /tmp directory with 512 MB of storage.
Since containers based on Linux images already have a folder called /tmp, you should be able to access that folder at any time from your code (the rest of the filesystem is read-only).
If you are looking to store content, Amazon's solution is to create and manage that content in an S3 bucket; a bucket is as easy to use as reading a file locally, but its contents remain accessible after the Lambda instance finishes the workload.
Please refer to Read file from aws s3 bucket using node fs and Upload a file to Amazon S3 with NodeJS for more details on how to use an S3 bucket. There are plenty of ways to achieve it regardless of the language being used.
This is all based on a best practice promoted by AWS on their platform, where lambdas remain stateless.
AWS Lambda provides a /tmp folder for users to write files on Lambda. I don't know your question's full context, but I hope this helps.
You can write files to AWS Lambda's /tmp folder.
e.g. If I want to create a file demo.txt at runtime/programmatically using AWS Lambda, I can write the file to /tmp/demo.txt.

AWS Amplify JavaScript Gitpod automatic setup

I'm trying to set up an AWS Amplify JavaScript project with Gitpod in a way that when I start a new Workspace I don't have to manually go through the amplify-cli steps (adding IAM user, generating aws-exports.js file, etc.).
I've managed to successfully install the aws-cli and amplify-cli on the machine so far (I'm adding this to my .gitpod.yml file on task init)
$ npm install @aws-amplify/cli
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
so I can add the
$ export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
$ export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
$ export AWS_DEFAULT_REGION=us-west-2
environment variables as Gitpod variables, but when running, for example, amplify pull, I don't see the [default] user as I normally would when running it with a local setup.
I've got it working. First, I added these environment variables for the amplify setup using the Gitpod account settings:
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
AWS_DEFAULT_REGION=us-west-2
AWS_AMPLIFY_PROJECT_NAME=headlessProjectName
AWS_AMPLIFY_APP_ID=amplifyServiceProjectAppId
The first three are the IAM user credentials and config, the latter two are amplify specific and can be found inside the AWS console on the Amplify project.
After that, I've created a Dockerfile for the Gitpod custom Docker image (as suggested by @Pauline) and a bash script that creates the ~/.aws config files and runs amplify pull in headless mode.
.gitpod.Dockerfile
FROM gitpod/workspace-full
# install aws-cli v2
RUN sudo curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" \
&& unzip awscliv2.zip \
&& sudo ./aws/install
# install amplify-cli
RUN sudo curl -sL https://aws-amplify.github.io/amplify-cli/install | bash && $SHELL
This will preinstall aws-cli and amplify-cli on the docker image so they are ready to use inside the workspace. Also don't forget to add the docker configuration to the top of .gitpod.yml file:
.gitpod.yml
image:
  file: .gitpod.Dockerfile
At this point, I'm setting up Amplify in a way that I don't have to manually pick the amplify-cli options when a new workspace is started. The magic happens inside a custom bash script with the help of the environment variables specified at the start:
amplify-pull.bash
#!/bin/bash
set -e
IFS='|'
# Specify the headless amplify pull parameters
# https://docs.amplify.aws/cli/usage/headless/#amplify-pull-parameters
VUECONFIG="{\
\"SourceDir\":\"src\",\
\"DistributionDir\":\"dist\",\
\"BuildCommand\":\"npm run-script build\",\
\"StartCommand\":\"npm run-script serve\"\
}"
AWSCLOUDFORMATIONCONFIG="{\
\"configLevel\":\"project\",\
\"useProfile\":false,\
\"profileName\":\"default\",\
\"accessKeyId\":\"$AWS_ACCESS_KEY_ID\",\
\"secretAccessKey\":\"$AWS_SECRET_ACCESS_KEY\",\
\"region\":\"$AWS_DEFAULT_REGION\"\
}"
AMPLIFY="{\
\"projectName\":\"$AWS_AMPLIFY_PROJECT_NAME\",\
\"appId\":\"$AWS_AMPLIFY_APP_ID\",\
\"envName\":\"dev\",\
\"defaultEditor\":\"code\"\
}"
FRONTEND="{\
\"frontend\":\"javascript\",\
\"framework\":\"vue\",\
\"config\":$VUECONFIG\
}"
PROVIDERS="{\
\"awscloudformation\":$AWSCLOUDFORMATIONCONFIG\
}"
# Create AWS credential file inside ~/.aws
mkdir -p ~/.aws \
&& echo -e "[default]\naws_access_key_id=$AWS_ACCESS_KEY_ID\naws_secret_access_key=$AWS_SECRET_ACCESS_KEY" \
>> ~/.aws/credentials
# Create AWS config file ~/.aws
echo -e "[default]\nregion=$AWS_DEFAULT_REGION" >> ~/.aws/config
# Run amplify pull in headless mode;
# this also generates the aws-exports.js file inside /src
amplify pull \
--amplify $AMPLIFY \
--frontend $FRONTEND \
--providers $PROVIDERS \
--yes
I'm using Vue for my frontend as an example, so those values need to be changed depending on the project type. The rest of the params are pretty straightforward, more info about the headless mode can be found here. I'm also creating the aws config files before the amplify pull command as mentioned before.
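If the backslash-escaped JSON strings feel brittle, the same parameters can be generated with a small Node helper and JSON.stringify (a sketch; the values are the placeholders from above):

```javascript
// Build the headless `amplify pull` arguments as real objects and serialize
// them, instead of hand-escaping JSON inside a bash script.
const amplify = {
  projectName: process.env.AWS_AMPLIFY_PROJECT_NAME || 'headlessProjectName',
  appId: process.env.AWS_AMPLIFY_APP_ID || 'amplifyServiceProjectAppId',
  envName: 'dev',
  defaultEditor: 'code',
};

const frontend = {
  frontend: 'javascript',
  framework: 'vue',
  config: {
    SourceDir: 'src',
    DistributionDir: 'dist',
    BuildCommand: 'npm run-script build',
    StartCommand: 'npm run-script serve',
  },
};

// Each object becomes one CLI argument, e.g. --amplify '<json>'.
const args = [
  '--amplify', JSON.stringify(amplify),
  '--frontend', JSON.stringify(frontend),
  '--yes',
];
```

You could then spawn the CLI with these args from a Node wrapper instead of the bash script; the structure is the same either way.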
And this is what the final .gitpod.yml file looks like
.gitpod.yml
image:
  file: .gitpod.Dockerfile
tasks:
  - name: Amplify pull dev
    command: |
      bash amplify-pull.bash
      gp sync-done amplify
  - name: Install npm packages, run dev
    init: yarn install
    command: |
      gp sync-await amplify
      yarn serve
ports:
  - port: 8080
    onOpen: open-preview
I'm also waiting for the Amplify pull to finish before running the dev server using gp sync.

Docker: Why Do Variables Passed From .env Linger in the Container Even After They Are Removed From .env and Rebuilt

I'm trying to understand why env variables inside my Docker container keep appearing when I've clearly removed or commented them out from my .env file. I'm fairly new to Docker and don't know if this is expected behavior or an anomaly.
The way my system is set up, I spin up an instance of Azure's IoT Edge server locally (via deployment.template.json), which builds the Docker container and populates the environment variables using the associated .env file.
Now what's perplexing me is that if I were to completely stop the server (not pause), comment out/remove the variable from the .env file, restart the server, and inspect the container (docker container inspect), I still see the variable name and value. I've also used docker system prune -a --volumes after stopping the server to prune my system and volumes, then restarted the server only to see the variable still listed.
Just in case it helps, inside my deployment.template.json I'm passing my variables as MY_VAR=${MY_VAR}. Then in my .env file I have the variable as MY_VAR=abc123.
From my Dockerfile:
# -------------
# Build Sources
# -------------
FROM node:10-alpine as builder
# Install additional git and openssh dependencies and make sure GitLab domain is accepted by SSH
RUN apk add --no-cache openssh git curl \
&& mkdir /root/.ssh/ \
&& touch /root/.ssh/known_hosts \
&& ssh-keyscan gitlab.com github.com >> /root/.ssh/known_hosts
WORKDIR /app
# Install app dependencies
RUN npm i -g typescript --no-cache --silent
COPY package*.json ./
RUN npm ci --only=production --silent
# Copy sources and build
COPY . .
RUN npm run build
# ----------------
# Production Image
# ----------------
FROM node:10-alpine
RUN apk add --no-cache curl
WORKDIR /app
COPY --from=builder /app/node_modules /app/node_modules
COPY --from=builder /app/dist /app/dist
COPY . .
USER node
CMD ["node", "dist/index.js"]
You can run docker inspect on your container and see which environment variables are defined in the docker create options.
You can also check the docker create options in the Azure Portal.
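To see exactly which variables the running process actually received, independent of what the .env file says now, a tiny Node debug script (the file name would be your choice, e.g. run via docker exec) can help:

```javascript
// Dump the process environment sorted by key, so stale or unexpected
// variables inside the container are easy to spot.
const entries = Object.entries(process.env)
  .sort(([a], [b]) => a.localeCompare(b))
  .map(([key, value]) => `${key}=${value}`);

console.log(entries.join('\n'));
```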

Running a bash script before startup in an NGINX docker container

I'm trying to run a javascript app on localhost:8000 using docker. Part of what I would like to do is swap out some config files based on the docker run command; I'd like to pass an environment variable into the container so that the bash script can use it as a parameter.
What my dockerfile is looking like is this:
FROM nginx
COPY . /usr/share/nginx/html
CMD ["bash","/usr/share/nginx/html/runfile.sh"]
And the bash script looks like this:
#!/bin/bash
if [ "$SECURITY_VERSION" = "OPENAM" ]; then
sed -i -e 's/localhost/openam/g' authConfig.js
fi
docker run -p 8000:80 missioncontrol:latest -e SECURITY_VERSION="TEST"
Docker gives me an exception saying -e exec command not found.
However if I change the dockerfile to use ENTRYPOINT instead of CMD, the -e flag works but the webserver does not start up.
Is there something I'm missing here? Is the ENTRYPOINT being overriden or something?
EDIT:
So I've updated my dockerfile to use ENTRYPOINT ["bash","/usr/share/nginx/html/runfile.sh", ";", " nginx -g daemon off;"]
But the docker container still shuts down. Is there something I'm missing?
The NGINX 1.19 image has a folder /docker-entrypoint.d at the root where startup scripts are placed; they are executed by the docker-entrypoint.sh script. You can also see the execution in the log.
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will
attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in
/docker-entrypoint.d/
/docker-entrypoint.sh: Launching
[..........]
/docker-entrypoint.sh: Configuration complete; ready for start up
For my future self and everybody else, this is how you can set up variable substitution at startup (for nginx; it may also work for other images).
I've also written a more in-depth blog post about it: https://danielhabenicht.github.io/docker/angular/2019/02/06/angular-nginx-runtime-variables.html
Dockerfile:
FROM nginx
ENV TEST="Hello variable"
WORKDIR /etc/nginx
COPY ./substituteEnv.sh ./substituteEnv.sh
# Execute the substitution script and pass the path of the file to replace
ENTRYPOINT ["./substituteEnv.sh", "/usr/share/nginx/html/index.html"]
CMD ["nginx", "-g", "daemon off;"]
substituteEnv.sh: (same as @Daniel West's answer)
#!/bin/bash
if [[ -z $1 ]]; then
echo 'ERROR: No target file given.'
exit 1
fi
# Substitute all environment variables defined in the file given as argument.
# Write to a temp file first: redirecting envsubst's output straight back into
# $1 would truncate the file before it is read.
envsubst '\$TEST \$UPSTREAM_CONTAINER \$UPSTREAM_PORT' < "$1" > "$1.tmp" && mv "$1.tmp" "$1"
# Execute all other parameters
exec "${@:2}"
Now you can run docker run -e TEST="set at command line" -it <image_name>
The catch was the WORKDIR; without it the nginx command wouldn't be executed. If you want to apply this to other containers, be sure to set the WORKDIR accordingly.
If you want to do the substitution recursively in multiple files, this is the bash script you are looking for:
# Substitutes all given environment variables
variables=( TEST )
if [[ -z $1 ]]; then
echo 'ERROR: No target file or directory given.'
exit 1
fi
for i in "${variables[@]}"
do
if [[ -z ${!i} ]]; then
echo 'ERROR: Variable "'$i'" not defined.'
exit 1
fi
echo $i ${!i} $1
# Variables to be replaced should have the format: ${TEST}
grep -rl $i $1 | xargs sed -i "s/\${$i}/${!i}/Ig"
done
exec "${@:2}"
I know this is late but I found this thread while searching for a solution so thought I'd share.
I had the same issue. Your ENTRYPOINT script should also include exec "$@"
#!/bin/sh
set -e
envsubst '\$CORS_HOST \$UPSTREAM_CONTAINER \$UPSTREAM_PORT' < /srv/api/default.conf > /etc/nginx/conf.d/default.conf
exec "$@"
That will mean the startup CMD from the nginx:alpine container will run. The above script injects the specified environment variables into a config file. By doing this at runtime you can override the environment variables.
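For reference, the ${VAR} substitution that envsubst performs can be sketched in a few lines of Node (a hypothetical substituteEnv helper; like envsubst with an explicit variable list, only the variables you pass in are replaced):

```javascript
// Replace ${NAME} placeholders using the given map; unknown names are left
// intact, mirroring envsubst's behavior when given an explicit variable list.
function substituteEnv(template, vars) {
  return template.replace(/\$\{(\w+)\}/g, (match, name) =>
    Object.prototype.hasOwnProperty.call(vars, name) ? vars[name] : match
  );
}

const out = substituteEnv('host: ${CORS_HOST}, port: ${UPSTREAM_PORT}', {
  CORS_HOST: 'example.com',
});
// ${UPSTREAM_PORT} is not in the map, so it stays as-is.
```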
Update the CMD line as below in your dockerfile. Please note that if runfile.sh does not exit successfully (exit 0; inside it), the nginx command after it will not be executed.
FROM nginx
COPY . /usr/share/nginx/html
CMD /usr/share/nginx/html/runfile.sh && nginx -g 'daemon off;'
The nginx Docker image uses a CMD command to start the server in the base image you use. When you use the CMD command in your dockerfile, you overwrite the one in their image. As mentioned in the dockerfile documentation:
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
The NginX image has docker-entrypoint.d included, and on container start it will look for any scripts located there. You can add your custom scripts during docker build. I also found that if you are using the alpine image, bash is not installed, so you can add it yourself by running:
RUN apk update
RUN apk upgrade
RUN apk add bash
sample DockerFile:
FROM nginx:alpine
EXPOSE 443
EXPOSE 80
RUN apk update
RUN apk upgrade
RUN apk add bash
COPY ["my-script.sh", "/docker-entrypoint.d/my-script.sh"]
RUN chown nginx:nginx /docker-entrypoint.d/my-script.sh
USER nginx
To limit the execution scope of your custom script, it's highly recommended to run your container as a non-privileged user.
The nginx container already defines an ENTRYPOINT. If you also define a CMD, Docker combines them as 'ENTRYPOINT CMD', so that the CMD becomes an argument of the ENTRYPOINT. That is why you need to redefine the ENTRYPOINT to get it working.
Usually the ENTRYPOINT is defined in such a way that, if you also pass a CMD, it will be executed by the ENTRYPOINT script. However, this might not be the case with every container.

Magento 2 not loading CSS and JavaScript

I have installed Magento 2 successfully on localhost, but I am not able to see the admin panel, as it renders a 404 error.
Secondly, when I open the front-end, the CSS and JavaScript are not loading. They also render 404 errors.
Also When I try to run command:
{your Magento install dir}/bin/magento setup:static-content:deploy
I got the following error
[InvalidArgumentException]
There are no commands defined in the "setup:static-content" namespace.
This one worked for me.
Use this command: php bin/magento setup:static-content:deploy
Step 1 : In CMD, open your root directory using the cd command
Step 2 : Run php bin/magento setup:static-content:deploy
Then check your pub/static folder; the CSS and JS files will be available there.
Here is the simplest solution. If the version name shows in the CSS path,
like: pub/static/version323334/,
then run this query in MySQL:
INSERT INTO core_config_data (path, value)
VALUES ('dev/static/sign', 0)
ON DUPLICATE KEY UPDATE value = 0;
After that, clear the config cache:
bin/magento cache:clean config
You can also disable the static file version from the admin.
Try the same command as root user by adding sudo like below
sudo php bin/magento setup:static-content:deploy
Changing dev/static/sign to 0 in core config data worked for me
then,
bin/magento cache:flush
then,
php bin/magento setup:static-content:deploy -f
Though it is an old question, its answers could not help me fix my issue, which generated the same error message. I suggest the following:
First, find the underlying cause. In your magento2 directory, look for Magento errors as follows:
tail var/log/system.log
Or check whether there is any error in the PHP error log file.
If you find an isolated error, fix it.
If no error is found, do the following. Remove the generated folders by executing the following commands:
sudo rm -rf pub/static
sudo rm -rf var/cache
sudo rm -rf var/composer_home
sudo rm -rf var/generation
sudo rm -rf var/page_cache
sudo rm -rf var/view_preprocessed
After deleting them, you can re-create them by executing the following commands:
sudo php bin/magento setup:static-content:deploy -f
We also faced this issue once, and sorted it out. For this you need to go directly into the bin directory and use the command, for example:
php magento setup:static-content:deploy
Sometimes if you run this command outside of bin or from another directory,
php bin/magento setup:static-content:deploy
then you will get an error like the following (it may be because of the Linux system):
[InvalidArgumentException]
There are no commands defined in the "setup:static-content" namespace.
Update:
If any *.xml file in your custom modules is not valid, then the same error will appear.
I also had the same issue and the steps below solved it:
Step 1: Navigate to the directory where Magento is installed.
Step 2: Run sudo php bin/magento setup:static-content:deploy
Provide the static deploy command like this:
php bin/magento setup:static-content:deploy
Please set full permissions for the pub and var folders.
After trying all the solutions mentioned over here and in https://magento.stackexchange.com/questions/97209/magento-2-css-and-javascript-not-loading-from-correct-folder,
we were not able to get this thing going.
This is a very weird answer, but it worked for us.
First, we cleared the cache and ensured that the static files were being created inside the pub/static/ folder.
Then we checked that deployed_version.txt contains the same version number as being loaded in the URL.
Our main culprit was the .htaccess file present in the pub folder. There should be just one .htaccess file, inside the pub/static folder, but not in the pub folder.
This did the trick for us after searching everywhere.
Hope it helps others looking for a similar answer.
Run the following commands in the CLI from your Magento 2 root folder:
$ php bin/magento setup:static-content:deploy
$ php bin/magento indexer:reindex
Then delete the var folder contents with this command at the root of magento2.
$ rm -rf var/*
Then refresh your homepage and admin panel.
If you are facing CSS and design problems after installation on Windows, follow these steps:
php bin/magento setup:static-content:deploy
php bin/magento indexer:reindex
Make sure the Apache "rewrite_module" is enabled, then restart the server.
Delete the cache folder under var/cache.
You just need to run this command in your terminal:
php bin/magento setup:static-content:deploy
Make sure you are at the root path of your Magento install in the terminal, then run the above command.
Just open
MAGENTO_ROOT/app/etc/di.xml
and replace the code below (around line number 574)
<item name="view_preprocessed" xsi:type="object">Magento\Framework\App\View\Asset\MaterializationStrategy\Symlink</item>
TO
<item name="view_preprocessed" xsi:type="object">Magento\Framework\App\View\Asset\MaterializationStrategy\Copy</item>
DELETE
MAGENTO_ROOT/pub/static/_requirejs
MAGENTO_ROOT/pub/static/adminhtml
MAGENTO_ROOT/pub/static/frontend
A simple and direct solution, hope it's useful.
Go to your WampServer icon and click on it, then:
Apache -> Apache modules -> rewrite_module [enable this]
After this, restart all services and check again.
This error happens when you have not set up permissions correctly. It can't see that the command actually exists.
Try running:
sudo find . -type d -exec chmod 770 {} \; && sudo find . -type f -exec chmod 660 {} \; && sudo chmod u+x bin/magento
sudo chown -R $(whoami):www-data .
Change www-data to appropriate webserver user. e.g. apache or www-data.
This worked for me:
1) Static content deploy. Run the below command from Magento 2 root directory:
sudo php bin/magento setup:static-content:deploy
2) Clear everything in var/cache directory or flush the Magento 2 cache using the below command:
php bin/magento cache:flush
3) Set proper permissions for Magento 2 directories by executing the below command from Magento 2 root directory:
sudo find . -type d -exec chmod 770 {} \; && sudo find . -type f -exec chmod 660 {} \; && sudo chmod u+x bin/magento
Hope this helps.
I just do
rm -rf var/di
then it works again.
Usually this happens because of a failed compilation in the var/di folder. You can solve it by deleting everything in your var folder.
Also, for the future, don't forget that the Magento command line implements Symfony verbosity levels: append -v, -vv, or -vvv to your command to see the exact error.
Please follow the steps below to get rid of this issue.
1) Download Magento 2.
2) Extract it in your www OR htdocs directory.
3) Install Magento. Do not use localhost; use 127.0.0.1 in the store URL and admin URL.
4) After successful installation, DO NOT RUN MAGENTO.
5) Now delete the cache / session of magento 2. Go to the below mentioned paths and delete the files.
Magento Root > var > cache > Delete all files
Magento Root > var > page_cache > Delete all files
Magento Root > var > session > Delete all files
6) Change the behavior of symlinks for some static resources as mentioned below:
When Magento 2 is not in production mode, it will try to create symlinks for some static resources on the local server. We have to change that behavior of Magento 2 by editing the ROOT > app > etc > di.xml file. Open di.xml in your favorite code editor and find the virtualType name="developerMaterialization" section. Below it you will find an item <item name="view_preprocessed" xsi:type="object"> which needs to be modified. Modify it by changing the following content:
Magento\Framework\App\View\Asset\MaterializationStrategy\Symlink
To:
Magento\Framework\App\View\Asset\MaterializationStrategy\Copy
7) Delete all the files except .htaccess
Magento Root > pub > static > Delete all files except **.htaccess**
It's done. Now you may run the Magento front- and backend URLs.
The following answer worked for me, thanks:
Open the file MAGENTO_ROOT/app/etc/di.xml
and replace the code below (around line number 574)
Magento\Framework\App\View\Asset\MaterializationStrategy\Symlink
with
Magento\Framework\App\View\Asset\MaterializationStrategy\Copy
DELETE
MAGENTO_ROOT/pub/static/_requirejs
MAGENTO_ROOT/pub/static/adminhtml
MAGENTO_ROOT/pub/static/frontend
The solution below worked:
Please run the below query in the database.
INSERT INTO core_config_data (path, value) VALUES ('dev/static/sign', 0) ON DUPLICATE KEY UPDATE value = 0;
https://magento.stackexchange.com/questions/97209/magento-2-css-and-javascript-not-loading-from-correct-folder
If you have tried php bin/magento setup:static-content:deploy or any such related commands and the issue is still there, then you may like to try this.
This fix addresses the 'No CSS and JavaScript' and 'Admin 404 page' issues after Magento installation (v2.3).
Step 1: Open httpd.conf.
Step 2: Search for
AllowOverride (may be written as AllowOverride all)
Require (may be written as Require local)
in the directory section of this file.
Step 3: Change
AllowOverride to AllowOverride All
Require to Require all granted
If you are facing a theming issue after installation in Magento 2, you can follow these steps:
Run this query:
INSERT INTO core_config_data (path, value) VALUES ('dev/static/sign', 0)
ON DUPLICATE KEY UPDATE value = 0;
For local machine run this query:
UPDATE core_config_data SET value = '0' WHERE
core_config_data.path LIKE '%web/seo/use_rewrites%';
Remove all the files from pub and var directory:
sudo rm -rf var/di var/generation/ var/page_cache/ var/cache/
pub/static/frontend/ pub/static/adminhtml/ pub/static/_requirejs/
pub/static/deployed_version.txt
Give Permission to var and pub directories of your project:
sudo chmod -R 777 var/* pub/*
Upgrade Setup:
sudo bin/magento setup:upgrade
Deploy content:
sudo php bin/magento setup:static-content:deploy
After these steps, you will be able to see proper theme.
I've faced the same issue and I got it resolved using the following procedure.
php bin/magento setup:static-content:deploy
php bin/magento c:f
sudo chmod -R 777 var/ pub/ generated/
sudo chown -R your-website-user:your-website-group .
ln -s static pub/static
ln -s media pub/media
If all of the above is not working, try setting the below paths to 0 in the core_config_data table:
web/secure/use_in_frontend
web/secure/use_in_adminhtml
If you are facing problems with CSS and JS page load/design after installation in Magento 2,
please follow these steps:
Open the terminal and navigate to the Magento web root:
$ cd /var/www/html/magento2
Step 1.
$ php bin/magento setup:static-content:deploy
Step 2.
$ php bin/magento indexer:reindex
Step 3.
Make sure the Apache "rewrite_module" is enabled and then restart the server
Step 4.
$ chown -R www-data:www-data /var/www/html/magento2
Step 5.
$ chmod -R 777 /var/www/html/magento2
Step 6.
Delete the cache folder under var/cache
The above steps work; I hope this will work for you also.
Let me know if any issue. :)
