I develop an Angular-PHP web application that I have running online, for different users, on 5 different subdomains, such as:
sub1.mydomain.com
sub2.mydomain.com
sub3.mydomain.com
sub4.mydomain.com
sub5.mydomain.com
Problem:
My problem is that I still develop the web app locally, and whenever I change files (PHP, JS, tpl.html, CSS) or add new ones, I have to upload them to each subdomain.
Question:
Is there a way/library/API that I can use to make something like a package (with the updated or new files), call it from each subdomain URL, and have it make the appropriate updates?
Or should I just copy them to each subdomain?
To make myself clear, in other words: just like CMS systems, where we press the update button and a component/module gets updated.
If anyone knows a way of doing that, please enlighten me. Thanks.
I tried to depict what I mean.
What you are describing is called deployment.
There are a lot of ways to create a deployment mechanism, so there is not a single answer to your question. It depends on the tools that you are using, the servers where your app is hosted, etc.
If you are not doing so already, I advise you to use Git to version your app (with GitHub or GitLab) and automate the deployment process whenever you push new code.
You can write your own deployment scripts or use an online service (probably what you need, given the "press the update button" behaviour you describe).
I can't advise you on one particular service, but you should find what you need by googling "deployment automation github".
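For instance, a bare-bones deploy script could push the same build to every subdomain in one go. This is only a sketch; the SSH host and document-root paths below are assumptions you would replace with your own:

    #!/usr/bin/env bash
    # deploy.sh - upload the local files to every subdomain's document root
    set -euo pipefail

    LOCAL_DIR="./dist/"              # the local, up-to-date files (assumed path)
    REMOTE_HOST="user@mydomain.com"  # SSH login for the hosting server (assumed)

    for SUB in sub1 sub2 sub3 sub4 sub5; do
        # rsync only transfers changed files; --delete removes files deleted locally
        rsync -avz --delete "$LOCAL_DIR" "$REMOTE_HOST:/var/www/$SUB.mydomain.com/"
    done

Hooking a script like this to a Git push (for example from a CI job on GitHub or GitLab) gives you the one-click update behaviour you describe.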
I would do it with config files. Considering the code for all my subdomains is the same, I would have a config for each subdomain and fetch the core files from the same location while serving different data, if your structure allows it.
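For example, on the front-end side the shared code could pick its settings from the host name at runtime, so the same files can be deployed everywhere. A minimal sketch in plain JavaScript (the setting names below are hypothetical):

    // config.js - one shared file, different data per subdomain
    var CONFIGS = {
      'sub1.mydomain.com': { apiBase: '/api/sub1', theme: 'blue' },
      'sub2.mydomain.com': { apiBase: '/api/sub2', theme: 'green' }
      // ...one entry per subdomain
    };

    // Fall back to defaults if the host is not listed
    var appConfig = CONFIGS[window.location.hostname] || { apiBase: '/api', theme: 'default' };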
Related
I'm working on a quiz with HTML/JS on GitHub, which will be dedicated to my classmates.
I would like to be able to read everyone's answers, so I thought about creating a text or CSV file with their answers that would be saved in a specific directory of the GitHub project.
But I'm a beginner and I don't know if that's possible. I've seen approaches that use PHP or Node.js with FileSaver.js, but none of them worked for me, because I would like it to be automatic, not to ask the user to download their answers.
If anyone knows how to do it, or can explain why it's impossible and how to do it otherwise, that would be great.
Thanks! ;)
Unless you want to make every person using the quiz a contributor to your GitHub project (which would require that they sign up for GitHub accounts and tell you their account names so you can manually grant them permission) and then use the API to read the CSV file, modify it, and commit the change (resolving any merge conflicts caused by race conditions): this is not possible (and if you are willing to do all that, it is among the most complex approaches you could take).
If you want to store and aggregate data submitted by visitors to a website, write some server-side code (using whatever language and framework you like; PHP and Node.js are both options) and use a web hosting service designed to support it. GitHub Pages is designed only for static pages and doesn't support any form of server-side programming.
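A minimal sketch of what that server-side part could look like with Node.js and Express (the /answers route, the field names, and the file path are assumptions you would adapt to your quiz):

    // server.js - collects quiz answers and appends them to a CSV file
    const express = require('express');
    const fs = require('fs');

    const app = express();
    app.use(express.json());            // parse JSON bodies sent by the quiz page
    app.use(express.static('public'));  // serve the quiz's HTML/JS/CSS from ./public

    app.post('/answers', (req, res) => {
      const { name, answers } = req.body;   // e.g. { name: "Alice", answers: ["A", "C", "B"] }
      const line = `${name},${answers.join(',')}\n`;
      fs.appendFile('answers.csv', line, (err) => {
        if (err) return res.status(500).send('Could not save answers');
        res.send('Answers saved');
      });
    });

    app.listen(3000, () => console.log('Quiz server listening on port 3000'));

The quiz page would then send the answers automatically with a fetch('/answers', { method: 'POST', ... }) call when the user finishes, with no download prompt.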
Once you store the data in a file, just use git commands to commit and push it.
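For example, assuming the repository is already cloned on the machine holding the file and the default branch is main (adjust if it is master):

    git add answers.csv
    git commit -m "Add collected quiz answers"
    git push origin main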
Hi, I am new to Adobe CEP panels, so please forgive me if this is a frequently asked question.
I have built my first CEP panel and it is working well, however, in order to make this more useful to others within the company, it would make sense if I were able to host it on one of the company's webservers then anyone in the company could access it. Also, any updates to the html/js/jsx would then only need to be made in one place. I've spent quite some time googling but I've not found any examples.
I would be grateful for any thoughts or suggestions on this.
Technically NO!!!!
Well, as you've mentioned that you've already made your first CEP panel and it's working fine, I can assume you've gone through the method of loading test panels via PlayerDebugMode 1.
So here is the catch: Photoshop only recognises two types of extensions, signed and unsigned. When you create a new panel in your local AppData/Library CEP extensions folder, you create an unsigned panel, and Photoshop considers it a test/development panel. When you finish your panel and make it distributable by building an installable package (an .exe, .dmg, or .zxp), it gets signed, and Photoshop considers it a signed extension and installs it to its legitimate path.
"if I were able to host it on one of the company's webservers then anyone in the company could access it."
The only way I could manage it was sharing the latest .zxp on my work server, from where my other designer mates copied and installed it onto their systems. That's what you can do if you really want to share it, but you can't assign some global path for extensions, because Photoshop won't allow you to do that. Regards!
I have started working on a project that needs a rewrite. So, instead of doing a big-bang release, we have decided to use the Strangler Pattern, which means the following:
The current application (stack details below) will be running as is under the existing domain https://app.com
The existing (and new) features will be re-written in a new stack (details below) and deployed in parallel to the existing app (under the same domain https://app.com)
The requirements are
The end-user always works with the same domain https://app.com
Any existing feature migrated to the new app, as well as any new feature, is available under the same domain https://app.com
The stack and architecture of the current app is
HTML files with hardcoded data
CSS files
font files
PDFs
images
flash files
among other things.
The application is static. It has no database. It makes calls to other 3rd-party APIs but does not have its own database (other than the files and the images).
It sits under a directory and is served by running a web server (Apache) on a private dedicated server.
The stack and architecture of the new rewrite will include:
React or Gatsby
A standard build system that generates the static files
The data (PDF, Images) hosted somewhere else
Flash files (until we figure out a better way)
Given these requirements, I thought of having two versions of the app behind some sort of load balancer, such as Nginx, and serving the URL patterns through a proxy.
For example
a request coming to https://app.com/productPage.html goes to the existing app deployment (assuming it has not been migrated)
a request coming to https://app.com/profilePage goes to the new app deployment (assuming it has been migrated)
Now, considering this situation, I want to ask the following questions:
Does this approach look sane? Are there better ways to deal with this situation?
How do I implement such a reverse-proxy-based system (with Nginx, or a better alternative)?
I would love to hear your ideas and any resources/books/GitHub repos that can help me learn and implement this.
Thanks a lot in advance!
I would recommend creating a v2 of the pages that have been migrated to the new functionality, and updating all links to those pages to point to v2.
If anyone has bookmarked the old links, those pages can simply redirect the user to the v2 ones using JS: window.location.href = url_of_target_page;
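For the reverse-proxy part of the question, here is a minimal Nginx sketch of the routing described above. The ports, upstream names, and the choice of /profilePage as a migrated route are assumptions for illustration:

    # inside the http block of nginx.conf
    upstream legacy_app { server 127.0.0.1:8080; }   # existing Apache-served app
    upstream new_app    { server 127.0.0.1:3000; }   # new React/Gatsby build

    server {
        listen 80;
        server_name app.com;   # TLS termination omitted for brevity

        # Routes that have been migrated go to the new app
        location /profilePage {
            proxy_pass http://new_app;
            proxy_set_header Host $host;
        }

        # Everything else stays on the legacy app until it is migrated
        location / {
            proxy_pass http://legacy_app;
            proxy_set_header Host $host;
        }
    }

As each page is migrated, you add (or widen) a location block pointing at the new app, until the legacy block can be removed entirely.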
I built my first JavaScript app, using HTML and CSS as well. It is a basic tip calculator. I've pushed my code to my Git repository, but when I click on the deployed website it's just the README file. I have a feeling I need to use Node.js, but after the basic reading I did on it I have no idea how to accomplish this. I just want to push the app to Heroku, or even just serve it off the GitHub page, to see it in action on another device.
I tried running npm init and it created a package.json, but every time I pushed to Heroku the app would crash and give me an error stating it cannot find the "start" script I entered.
Here's my GitHub repo for the app: https://github.com/jaronow/tip-calculator. I would appreciate some basic guidance or a link to somewhere I can learn how to accomplish this task.
You don't need to use Heroku for a one-page web app like yours. You can host it directly on GitHub using GitHub Pages. You want to follow the "Project site" instructions. Pay attention to selecting the "source" when you go through the steps; you'll want to specify your html/main.html file.
Also, looking at your code, you should consider renaming your "java" folder to either "js" or "javascript" or something similar. Java is a different language and naming the folder that is confusing.
You do not need to use NodeJS for this project because there is nothing that runs on the server side. Everything runs in the browser: the HTML, CSS, and JavaScript.
In order to deploy to Heroku, you would have to use Node and set up a basic server to serve your static assets, such as the HTML, images, and CSS.
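A minimal sketch of such a server with Express, assuming your pages live where they currently do in the repo (adjust the paths if they differ):

    // server.js - serves the static tip calculator so Heroku has something to run
    const express = require('express');
    const path = require('path');

    const app = express();

    // Serve every static file in the repo (HTML, CSS, JS)
    app.use(express.static(__dirname));

    // Send the main page for the root URL (adjust the path to match your repo)
    app.get('/', (req, res) => res.sendFile(path.join(__dirname, 'html', 'main.html')));

    // Heroku provides the port through an environment variable
    const port = process.env.PORT || 3000;
    app.listen(port, () => console.log(`Listening on port ${port}`));

You would also add Express as a dependency (npm install express) and a start script to package.json, which is what the Heroku error is complaining about:

    "scripts": {
      "start": "node server.js"
    }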
To deploy or host your app you can use GitHub Pages; I am not sure if you have ever heard of it.
Here is the link: https://pages.github.com/
The page has a step-by-step guide on how to host it.
So I noticed, when I ran my React app's production build (created with create-react-app) and opened the login screen, that all of the source code for the app was available within the static/js folder. Basically, the code in the production build doesn't look any different from the code in my IDE.
I am wondering if there is a way to hide this behind a login screen, so that a user can't directly access these files unless the login is successful. I have looked around and was unable to find anything of use.
The JS files from the production build should be minified, which would look a lot different than the code in your IDE. I assume what looks "the same" is the source viewed through the browser's developer tools. The solution for that is to not deploy the source map files (*.js.map); those are the files that allow developer tools to transform the minified code back to its original look.
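For example, with a create-react-app build you could strip the maps from the output folder before deploying (this assumes the default build/ directory):

    npm run build
    find build -name '*.map' -type f -delete   # remove the generated source maps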
Removing source maps makes it difficult for someone to learn from the code easily, but if there is sufficient motivation to do so, it can still certainly be reverse-engineered. There are also some parts that wouldn't be obfuscated much at all such as the URLs for API calls which would then give someone a lot more information to use as the basis for hacking attempts.
If you need to prevent seeing any version of the source for people that are not logged in, I would recommend building your app as two apps: one that just contains the login portion and one with the rest. Code-splitting within one app won't do the trick (at least not without using a solution that is quite a bit more complicated to manage than the two-app option), because it just makes the download process lazy and it is still pretty easy for someone to determine what the other files are and download them.
However, even splitting this into two apps only helps if you host the second app differently. This will require server-side protection that only serves the JavaScript files for the second app to a user that is logged in. This means either using a different sub-domain for the second app or at least a different directory on the server that has those protections baked in.
How you would implement that protection depends on the details of your authentication approach and the technology stack being used on the server. Most likely, it means using a cookie set by the login process and then having the JS files for the second app served up by something that verifies the cookie before allowing the JS files to be served to the browser.
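A rough sketch of that kind of gate, assuming an Express server in front of the second app and a session cookie set by the login process (the cookie name, folder paths, and isValidSession helper are all hypothetical):

    // server.js - only serve the main app's files to logged-in users
    const express = require('express');
    const cookieParser = require('cookie-parser');

    const app = express();
    app.use(cookieParser());

    // The login app is public
    app.use('/login', express.static('login-app/build'));

    // Everything under /app requires a valid session cookie
    app.use('/app', (req, res, next) => {
      if (isValidSession(req.cookies.session)) return next();  // your own session check
      res.redirect('/login');
    }, express.static('main-app/build'));

    // Placeholder: replace with a real lookup against your session store
    function isValidSession(token) {
      return Boolean(token); // hypothetical check - do not use as-is
    }

    app.listen(3000);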
To avoid exposing your source code in the production build, try building your app with:
GENERATE_SOURCEMAP=false npm run build
Alternatively, set GENERATE_SOURCEMAP=false in a .env file at the project root so it applies to every build.