Avoid overwriting files on git push with .gitignore - JavaScript

I have a project that runs both locally and in the cloud.
When we work locally we need to change some files to make things work, and then rewrite those files again before making the git push so the cloud version is what gets uploaded.
For example, one file:
'use strict';

const mysql = require('mysql');

/* Cloud configuration, commented out while working locally:
const dbConn = mysql.createPool({
  connectionLimit: 5,
  host: process.env.MYSQL_CONNECTION_STRING.split(':')[0],
  user: process.env.MYSQL_USER,
  password: process.env.MYSQL_PASSWORD,
  database: process.env.MYSQL_CONNECTION_STRING.split('/')[1],
  charset: 'utf8mb4'
});
module.exports = dbConn; */

// Local configuration:
const dbConn = mysql.createPool({
  connectionLimit: 5,
  host: 'localhost',
  user: 'root',
  password: 'Mraixa2015L',
  database: 'tool_bbdd',
  charset: 'utf8mb4'
});
module.exports = dbConn;
Locally I use the local MySQL server with my credentials, and in the cloud the other ones, so every time I have to comment and uncomment the blocks to make it work.
This is one example, but I need to do things like this in other files as well.
I thought it might be possible to list these files in the .gitignore file.
Inside my .gitignore
config/db.config.js
And then, when I make the git push, these changes would not be uploaded and would not overwrite the values used in the cloud.
Is it possible?
Thanks

The point is not about pushing. What will be pushed is whatever is in the commits. Git can't just remove a file from a commit when pushing. What you should care about is not committing it in the first place.
If the file is already tracked, .gitignore makes no difference. You can ask Git to ignore an already-tracked file with git update-index --assume-unchanged... or, what I do sometimes, is keep those changes in a private branch, so you can "easily" apply and unapply them:
git show X/some-private-branch | git apply     # boom! I have my changes there
# when I want to remove them
git show X/some-private-branch | git apply -R  # the change is gone
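For completeness, the update-index route mentioned above would look like this for the file from the question (flip it back with --no-assume-unchanged when you actually want to commit a change to it):
git update-index --assume-unchanged config/db.config.js    # stop noticing local edits
git update-index --no-assume-unchanged config/db.config.js # track changes again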

The situation here is that you want to ignore some changes to tracked files with Git. Unfortunately, as the Git FAQ mentions, there's no way to do that with Git. Specifically, using git update-index for this purpose doesn't work properly:
It’s tempting to try to use certain features of git update-index, namely the assume-unchanged and skip-worktree bits, but these don’t work properly for this purpose and shouldn’t be used this way.
The easiest way to solve this problem is to include two separate files, one with the production values and one with the development values, both under names independent of the actual desired location, and then use a script to copy the correct file into the desired location, which is ignored (and not tracked). You can even have that script adjust the development or production values to include additional data based on things like environment variables.
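A minimal sketch of that approach, assuming the tracked files are named config/db.config.development.js and config/db.config.production.js and the script lives at the project root (those names are hypothetical, not from the question), while config/db.config.js itself stays in .gitignore:

// select-config.js -- copies the right DB config into the ignored location
'use strict';

const fs = require('fs');
const path = require('path');

const env = process.env.NODE_ENV === 'production' ? 'production' : 'development';
const source = path.join(__dirname, 'config', `db.config.${env}.js`);
const target = path.join(__dirname, 'config', 'db.config.js');

fs.copyFileSync(source, target);
console.log(`Copied ${source} -> ${target}`);

Run it with node select-config.js before starting the app (or wire it into an npm prestart script), locally with the default and in the cloud with NODE_ENV=production.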

Related

How to hide my database string in a github public repo

I uploaded my repo and it has a database connection string named 'dbstring' which I do not want to share with anyone.
I created a repository secret on GitHub named DBSTRING with its value, but the thing is I don't know how to access it.
This is my uploaded code, which reveals my dbstring:
const dbstring = 'mongodb+srv:/***********b.net';
mongoose.connect(dbstring, { useUnifiedTopology: true, useNewUrlParser: true });
const db = mongoose.connection;
db.once('open', () => {
  console.log('Database connected:', dbstring);
});
How can I replace dbstring with the secret value I created on my GitHub repo?
What you need to do is use environment variables, with a .env file (if you use dotenv) for each environment. That way you keep your database credentials safe on your computer and on the server, and it also makes it possible to target different environments such as production, dev, test, etc. Make sure the .env file is added to the .gitignore file.
It's also important that this code is executed on the server side, otherwise anyone with the dev tools open will be able to see the credentials. On the client side you then make a request, for example with axios, to the URL backed by that database connection.
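A rough sketch of that setup (the DBSTRING name mirrors the repository secret from the question; the file names and the placeholder value are assumptions):

# .env -- listed in .gitignore, never committed
DBSTRING=mongodb+srv://user:password@cluster.example.net/mydb

// db.js -- reads the connection string from the environment instead of hard-coding it
require('dotenv').config();
const mongoose = require('mongoose');

mongoose.connect(process.env.DBSTRING, { useUnifiedTopology: true, useNewUrlParser: true });

const db = mongoose.connection;
db.once('open', () => {
  console.log('Database connected');
});

module.exports = db;

On the server you set DBSTRING as a real environment variable (or through your host's secret management) instead of shipping the .env file.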
If the .env file works for you, another option is to encrypt it before uploading it to GitHub, e.g. create an env-production file and encrypt it; whenever you use that repo you decrypt it, and you can also add that step to your CI/CD pipeline.

Publish a site in a subfolder with Traefik and Next.js

I'm trying to publish a website in a subfolder (example.com/sitename) using Traefik.
The site is built with Next.js.
What happens when I deploy is that all script links in the built site disregard the folder (sitename). For example, a JS script named generatedfile.js is accessed via example.com/generatedfile.js, while the correct way would be example.com/sitename/generatedfile.js.
My Traefik args:
-l traefik.frontend.rule="Host:example.com; PathPrefixStrip:/sitename" -l traefik.frontend.entryPoints="http, https" -l traefik.frontend.headers.SSLRedirect="true"
I tried adding basePath to my next.config.js, but when I do this, I can only access the site at example.com/sitename/sitename.
next.config.js:
// withFonts comes from the fonts plugin the project already uses (its require/import is not shown in the question)
module.exports = withFonts({
  basePath: '/sitename'
});
I'm using Docker to deploy to AWS.
I've been trying to solve this all day and I don't know what else to try.
Sorry for my English, it's not my first language.
PathPrefixStrip means: match the path and strip the matched prefix before forwarding the request to your application. Use PathPrefix instead.
Looks like you're using v1.x of Traefik. Here's the documentation explaining the difference better: https://doc.traefik.io/traefik/v1.7/basics/
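Under that assumption, the rule from the question would become something like the following (a sketch for Traefik v1; basePath: '/sitename' stays in next.config.js so the generated asset links include the prefix):
-l traefik.frontend.rule="Host:example.com; PathPrefix:/sitename" -l traefik.frontend.entryPoints="http, https" -l traefik.frontend.headers.SSLRedirect="true"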
It's worth mentioning that if you have multiple routing rules, Traefik sorts them by string length in descending order and goes through them to match the incoming request. In other words, /api is matched before /.

How can I have a host and container read/write the same files with Docker?

I would like to volume mount a directory from a Docker container to my workstation, so that when I edit the content in the volume mount from my workstation it is updated in the container as well. This would be very useful for testing and developing web applications in general.
However, I get a permission denied error in the container, because the UIDs in the container and on the host aren't the same. Isn't the original purpose of Docker to make development faster and easier?
This answer works around the issue I am facing when volume mounting a Docker container to my workstation. But by doing this, I make changes to the container that I don't want in production, and that defeats the purpose of using Docker during development.
The container runs Alpine Linux, the workstation Fedora 29, and the editor is Atom.
Question
Is there another way, so that both my workstation and the container can read/write the same files?
There are multiple ways to do this, but the central issue is that bind mounts do not include any UID mapping capability: the UID on the host is what appears inside the container and vice versa. If those two UIDs do not match, you will read/write files with different UIDs and likely experience permission issues.
Option 1: get a Mac or deploy Docker inside VirtualBox. Both of these environments have a filesystem integration that dynamically updates the UIDs. For Mac, that is implemented with OSXFS. Be aware that this convenience comes with a performance penalty.
Option 2: Change your host. If the UID on the host matches the UID inside the container, you won't experience any issues. You'd just run a usermod on your user on the host to change your UID there, and things will happen to work, at least until you run a different image with a different UID inside the container.
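For instance, changing your UID on the host might look like this (a sketch; 1000 stands for whatever UID the image runs as, your_username is a placeholder, and files already owned by your old UID would still need a chown afterwards):
sudo usermod -u 1000 your_username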
Option 3: Change your image. Some will modify the image to a static UID that matches their environment, often to match a UID in production. Others will pass a build arg with something like --build-arg UID=$(id -u) as part of the build command, and then have the Dockerfile use it with something like:
FROM alpine
ARG UID=1000
RUN adduser -u ${UID} app
The downside of this is that each developer may need a different image, so they are either building locally on each workstation, or you centrally build multiple images, one for each UID that exists among your developers. Neither of these is ideal.
Option 4: Change the container UID. This can be done in the compose file, or on a one-off container with something like docker run -u $(id -u) your_image. The container will now be running with the new UID, and files in the volume will be accessible. However, the username inside the container will not necessarily map to your UID, which may look strange in any commands you run inside the container. More importantly, any files owned by the user inside the container that you have not hidden with your volume will still have the original UID and may not be accessible.
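A one-off development run with a bind mount might then look like this (a sketch; your_image and the /code path are placeholders):
docker run -u "$(id -u):$(id -g)" -v "$PWD:/code" your_image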
Option 5: Give up, run everything as root, or change permissions to 777 allowing everyone to access the directory with no restrictions. This won't map to how you should run things in production, and the container may still write new files with limited permissions making them inaccessible to you outside the container. This also creates security risks of running code as root or leaving filesystems open to both read and write from any user on the host.
Option 6: Set up an entrypoint that dynamically updates your container. Despite not wanting to change your image, this is my preferred solution for completeness. Your container does need to start as root, but only in development, and the app will still be run as the user, matching the production environment. However, the first step of that entrypoint is to change the user's UID/GID inside the container to match your volume's UID/GID. This is similar to option 4, but now files inside the image that were not replaced by the volume have the right UIDs, and the user inside the container will show up with the changed UID, so commands like ls show the username inside the container rather than a UID that may map to another user or to no one at all. While this is a change to your image, the code only runs in development, and only as a brief entrypoint to set up the container for that developer, after which the process inside the container will look identical to that in a production environment.
To implement this I make the following changes. First, the Dockerfile now includes a fix-perms script and gosu from a base image I've pushed to Docker Hub (this is a Java example, but the changes are portable to other environments):
FROM openjdk:jdk as build
# add this copy to include fix-perms and gosu or install them directly
COPY --from=sudobmitch/base:scratch / /
RUN apt-get update \
&& apt-get install -y maven \
&& useradd -m app
COPY code /code
RUN mvn build
# add an entrypoint to call fix-perms
COPY entrypoint.sh /usr/bin/
ENTRYPOINT ["/usr/bin/entrypoint.sh"]
CMD ["java", "-jar", "/code/app.jar"]
USER app
The entrypoint.sh script calls fix-perms and then uses exec and gosu to drop from root to the app user:
#!/bin/sh
if [ "$(id -u)" = "0" ]; then
  # running on a developer laptop as root
  fix-perms -r -u app -g app /code
  exec gosu app "$@"
else
  # running in production as a user
  exec "$@"
fi
The developer compose file mounts the volume and starts as root:
version: '3.7'
volumes:
  m2:
services:
  app:
    build:
      context: .
      target: build
    image: registry:5000/app/app:dev
    command: "/bin/sh -c 'mvn build && java -jar /code/app.jar'"
    user: "0:0"
    volumes:
      - m2:/home/app/.m2
      - ./code:/code
This example is taken from my presentation available here: https://sudo-bmitch.github.io/presentations/dc2019/tips-and-tricks-of-the-captains.html#fix-perms
Code for fix-perms and other examples are available in my base image repo: https://github.com/sudo-bmitch/docker-base
Since the UIDs in your containers are baked into the container definition, you can safely assume that they are relatively static. In this case, you can create a user on your host system with the matching UID and GID. Switch to the new account, and then make your edits to the files. Your host OS will not complain since it thinks it's just that user accessing its own files, and your container OS will see the same.
Alternatively, you can consider editing these files as root.

knexfile.js does not read Dotenv variables

So I am trying out Knex.js and the first setup worked like a charm. I set up my connection, created a data structure, and in my terminal I ran $ knex migrate:latest.
It all worked fine: the migrated tables showed up in my database, and running the migrate again gave Already up to date.
Now here is where I get an issue: using dotenv. Here is my code:
require('dotenv').config();

module.exports = {
  development: {
    client: process.env.DB_CLIENT,
    connection: {
      host: process.env.DB_HOST,
      user: process.env.DB_ROOT,
      password: process.env.DB_PASS,
      database: process.env.DB_NAME,
      charset: process.env.DB_CHARSET
    }
  }
};
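For reference, the matching .env in the project root would look something like this (the variable names come from the knexfile above; the values are placeholders):
DB_CLIENT=mysql
DB_HOST=localhost
DB_ROOT=root
DB_PASS=secret
DB_NAME=my_database
DB_CHARSET=utf8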
As far as I can see nothing is wrong with it, and when I run the script through Node no errors show up.
Then I wanted to check whether I could still do a migrate, and I get the following error:
Error: ER_ACCESS_DENIED_ERROR: Access denied for user ''@'[MY IP]'
(using password: YES)
I am using the same vars, only this time from my .env file. But looking at the error, nothing is loaded from it, and yes, both knexfile.js and .env are in the root of my project :) Among the things I tried is setting the path in different ways within require('dotenv').config();, but then it would throw an error from dotenv, meaning the file was already being loaded correctly.
Can anyone help me figure this out?
So after some trial and error I finally figured out what was wrong. I don't know what caused it, but somehow the install of Knex wasn't done properly...
I uninstalled and reinstalled Knex (local and global). This time I installed it globally first and then as a dependency. After that I initialized Knex again ($ knex init) and started from the ground up.
I think, although I am still not sure why because I could not find any info about it, that the order of installing Knex matters (or at least mattered in my case, and I am not even sure what I did wrong the first time).
On the side:
If you are new to Knex and just blindly follow a random tutorial/article and create a new file for Knex by hand (i.e. knexfile.js), Knex will still work, but other packages could fail to execute properly. This is what I don't see in most articles I found: read the documentation on how to generate the files needed (migrations and seeds). Most articles don't cover these steps properly.
Hope this is worth anything.

Removing Migrations With Knex.js In My Node.js Application

I am trying to get Knex working in my Node.js application. I was following a tutorial and at some point created a table but could not repeat the process. I removed the table and deleted all the migration folders. At this point I started over, but after creating a new migration and then running knex migrate:latest I get an error saying the migration directory is corrupt because the original migration I had is missing.
I was under the impression that if the file is missing, Knex should not know it was ever there.
What is the proper way to remove a migration from my project?
knexfile.js
development: {
  client: 'pg',
  connection: {
    host: '127.0.0.1',
    user: 'postgres',
    password: 'password',
    database: 'myDatabase'
  },
  pool: {
    min: 10,
    max: 20
  },
  migrations: {
    directory: __dirname + '/db/migrations'
  },
  seeds: {
    directory: __dirname + '/db/seeds/development'
  }
}
db.js
var config = require('../knexfile.js');
var env = 'development';
var knex = require('knex')(config[env]);
module.exports = knex;
console.log('Getting knex');
knex.migrate.latest([config]);
console.log('Applying migration...');
Running this gives the error:
knex migrate:latest
Using environment: development
Error: The migration directory is corrupt, the following files are missing: 20161110130954_auth_level.js
but this migration does not exist because I deleted it.
You should have rolled back the migration (knex migrate:rollback) before deleting the file. I guess what you can do now is:
touch [full_path_to_migrations_here]/migrations/20161110130954_auth_level.js
knex migrate:rollback
rm [full_path_to_migrations_here]/migrations/20161110130954_auth_level.js
Reference here
https://github.com/tgriesser/knex/issues/1569
Your migration files have been deleted, but they are still referenced in a table called "migrations" (see the edit below), which was generated by Knex.
You should be able to check it by connecting to your local database.
I faced the same problem and I solved it by removing the records corresponding to my deleted migration files:
delete from migrations
where migrations."name" in ('20191027220145_your_migration_file.js', ...);
EDIT: The migrations table name might change depending on the options or on the version of Knex you use; the constant is defined in the Knex source and used by the migrator.
To be sure of the name, you could list all tables as suggested by @MohamedAllal.
Hope it helps.
To remove a migration you can either roll it back and then remove the file, or skip the rollback and follow the steps below.
(The steps below also answer the following error, which you may already be seeing:)
Error: The migration directory is corrupt
This happens when migration files are deleted while the records in the migration table created by Knex remain there.
So simply clear those records up (remove the files then clear the records, or clear the records then remove the files).
Important to note: the migration table is nowadays called knex_migrations; I don't know if it was different in the past, but it's better to list the database tables to make sure.
I'm using Postgres, so running \d in psql lists the tables, including the Knex migration tables.
You can clear the records with raw SQL using your database terminal client, with Knex itself, or by any other means (a GUI client such as pgAdmin, MySQL Workbench, ...).
Raw SQL:
DELETE FROM knex_migrations
WHERE knex_migrations."name" IN ('20200425190608_yourMigFile.ts', ...);
Note that you can copy-paste the file names straight from the error message (if you get it), e.g. 20200425190608_creazteUserTable.ts, 20200425193758_createTestTestTable.ts from
Error: The migration directory is corrupt, the following files are missing: 20200425190608_creazteUserTable.ts, 20200425193758_createTestTestTable.ts
Copy-paste, and it's fast. (You can trigger the error by trying to migrate.)
Using Knex itself:
knex('knex_migrations')
  .delete()
  .whereIn('name', ['20200425190608_yourMigFile.ts', ...]);
Create a small script that calls your Knex instance and you're done.
After cleaning, the migrations run nicely again and the directory is no longer corrupt.
Happy coding!
Removing a migration (bring it down, then remove):
What is the right way to remove a migration?
The answer is to bring that one migration down and then remove its file:
$ knex migrate:down "20200520092308_createUsersWorksTable.ts"
$ rm migrations/20200520092308_createUsersWorksTable.ts
You can list the migrations to check, as below:
$ knex migrate:list
(available from v0.19.3; if it's not available, update Knex (npm i -g knex))
Creating a temporary alter migration (don't):
If you are like me and like to update the schema directly in the base migration, you may think about creating an alter migration, running it, and then removing it: a fast flow where you make the update in the base creation migration, copy it into a newly created alter migration, run it, and then remove it.
If you're thinking that way, don't.
You can't roll it back, because the changes are exactly what you want to keep, so you can't cancel them. You can do it, but then you have to clear the records afterwards or you'll get the error above, which is just not worth it.
It's better to create a separate alter script, not a migration file, and run it directly.
My preference is to create an alter.ts (.js) file in the database folder, put the alter-schema code there, and add an npm script to run it (see the package.json sketch after the skeleton below). Each time, you just modify it and run it.
Here is the base skeleton:
import knex from './db';

(async () => {
  try {
    const resp = await knex.schema.alterTable('transactions', (table) => {
      table.decimal('feeAmount', null).nullable();
    });
    console.log(resp);
  } catch (err) {
    console.log(err);
  }
})();
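As a sketch of the npm script mentioned above (the script name, the database/alter.ts path and the use of ts-node are assumptions, not from the answer):
package.json (excerpt):
{
  "scripts": {
    "db:alter": "ts-node database/alter.ts"
  }
}
Then npm run db:alter applies whatever the alter script currently contains.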
Even better, with VS Code I just use Run Code (if the Code Runner extension is installed, which is a must).
If there are no errors, then it ran well, and you can check the response.
You can look up the schema API in the docs.
Also, to alter an existing column you need to use the alter() method, as in the snippet below taken from the docs:
// ________________ alter fields
// drops previous default value from column, changes type
// to string and adds not nullable constraint
table.string('username', 35).notNullable().alter();
// drops both not null constraint and the default value
table.integer('age').alter();
Now happy coding!
Try the option below; it does exactly what its name says:
migrations: {
  disableMigrationsListValidation: true,
}
See the Knex.js migration API documentation.
Since you are in development, with seeds and migration files that let you recreate the tables and data, you can just drop and recreate the database and rerun the available Knex migrations.
My development environment was using SQLite, so rm -rf dev.sqlite3 in the terminal fixed this.
With PostgreSQL it would be dropdb "db_name" in the terminal; see the PostgreSQL docs for more alternatives.
This is simple and there is no need to do a rollback after that.
The record of your old migrations is gone and you can recreate everything with a fresh knex migrate:latest.
If using PostgreSQL on Heroku, navigating to the database settings and clicking the "Reset Database" button works.
You can also reset the database by running this command:
heroku pg:reset --confirm app-name
Hot fix
The issue happens because the deleted migrations are still present in the knex_migrations table in the database, so the knex_migrations table does not match the migration files in the working directory. You have to delete those migrations from the knex_migrations table:
delete from knex_migrations km
where km."name" = 'deleted_migration.js';
Go to the graphical client you use to browse the database; alongside the tables you made there will be two extra tables named knex_migrations and knex_migrations_lock. Delete both of these tables and the other tables that you created, then run
knex migrate:latest
Caution: this will remove all data from the database (that's fine if you are using it for development).
