I'm building a web app that constructs dynamic widgets on the client side. Currently I am using Node.js with pug as my server-side templating library, and I like the simplicity of pug.
I would like to have a series of small pug files on the server that the client side can use as building blocks to construct the widget the user wants.
I tried using a previous solution found here:
client side server side templating nodejs
However, that solution looks like overkill for what I want. Moreover, it looks like the ezel project is no longer maintained: it hasn't been updated in two years and it still uses jade (which npm throws a lot of errors about).
I just want to be able to construct my dynamic widgets in pug in the browser. This page seems to have exactly what I want:
https://pugjs.org/api/reference.html
Specifically, the pug.renderFile('path/to/file.pug', options); function seems like exactly what I want to use to dynamically build my widgets (the user has full control over how the widgets are constructed/displayed, so the browser needs to dynamically construct the HTML views).
My issue is the dependence on:
https://pugjs.org/js/pug.js
And the need to do require('pug') in the browser. I already have pug installed as part of my package.json. Is there a more robust way of getting pug.js directly? I am still new to web development (my background is in C++/Java), so I'm not entirely sure if using pug.js in the browser directly is the best solution or if there is a better standard solution. The Stack Overflow question I linked is the only post I came across that is remotely similar.
I researched and tested a solution that I really like. NPM has a cool package called pug-cli.
https://www.npmjs.com/package/pug-cli
I modified my npm start script to do the following:
pug -c -w --name-after-file -o public/js/views views/client/
What this allows me to do is write my client views in pug in the views/client folder. A task runs in the background that monitors changes in views/client/. Upon changes, it compiles the .pug files from the views/client/ folder into JavaScript and saves them into public/js/views/. Then in the client code you just include Template.js and call Template(parameters) in your JS. There is no need for pug.js on the client side. This compiles with debugging enabled; to turn debugging off, run with -D.
For instance, views/client/example.pug will automatically get compiled to public/js/views/exampleTemplate.js.
Then all you have to do in the client is include this JS file and call exampleTemplate(params) to get your templated string (you can call this arbitrarily with different parameters to get different views). This allows me to arbitrarily/dynamically compose and construct views from the client, when the views are unknown on the server side.
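To make that last step concrete, here is roughly what the client-side usage looks like (the locals and the container id are just illustrative, not part of the generated code):

// after including /js/views/exampleTemplate.js via a script tag:
var html = exampleTemplate({ title: 'My widget', items: ['a', 'b', 'c'] });
document.getElementById('widget-container').innerHTML = html;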
I like this approach for my workflow, but I am open to better suggestions.
If you use webpack:
https://github.com/pugjs/pug-loader and
https://github.com/willyelm/pug-html-loader serve well (a rough pug-loader setup is sketched below).
In case of rollup:
https://www.npmjs.com/package/rollup-plugin-pug + https://www.npmjs.com/package/rollup-plugin-pug-html seem to be a good solution (I'm currently testing how they work; we are experimenting with native ES6 modules and backing them up with rollup bundles).
In case of browserify:
https://www.npmjs.com/package/jadeify (I have never tried it)
Also, pug-cli has a -c flag, so you can just run any watcher and generate JS files as mentioned above, but that seems a bit too bare-bones given the variety of bundling tools we have now in 2017.
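For the webpack option above, a minimal pug-loader setup might look something like this (the template file name and locals are placeholders, not from the question):

// webpack.config.js (relevant part)
module.exports = {
  module: {
    rules: [
      { test: /\.pug$/, use: 'pug-loader' } // .pug imports become compiled template functions
    ]
  }
};

// in client code:
var widgetTemplate = require('./views/widget.pug');
document.body.innerHTML = widgetTemplate({ title: 'Hello' });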
My primary question is whether the 'config' should live in a separate JSON/rc file, or whether a general withCustomConfig-like function makes more sense. I realize this may be a matter of opinion, but I'm interested in hearing what more experienced developers think, as my searches for more information on this topic are coming up completely empty and I'm sure there's a preferred/standard way of handling something like this.
Essentially, I'm building an npm package for the generation/parsing of markdown for a specific use case. The package allows custom configuration of the characters used in the structure of the markdown itself. Originally, I intended to have users apply changes to the config via a simple withCustomConfig function (that accepts a config object); however, I started wondering whether setting it via a JS function was not ideal, as the config's state then feels more transient/loosely defined than a completely separate rc or JSON config. I realize that is a matter of perception, but I think many people would share the same sentiment. And this poses a large risk because any changes to the configuration post-production would result in existing markdown becoming unparseable.
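For context, the function-based approach looks roughly like this (the option names here are purely illustrative, not the package's real ones):

// default markdown characters; the real package has its own set
const defaultConfig = { bullet: '-', emphasis: '*', heading: '#' };
let activeConfig = { ...defaultConfig };

// merge user overrides over the defaults
function withCustomConfig(overrides) {
  activeConfig = { ...defaultConfig, ...overrides };
  return activeConfig;
}

// consumers would call it once at startup, e.g.:
withCustomConfig({ bullet: '*' });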
So, I started looking into having users configure a separate JSON config file. In order to import that into the browser, I realized I'd have to parse the JSON and convert it into a JS module before the user builds their client-side application, which was relatively easy thanks to Node's built-in fs module. I set up a small CLI utility to regenerate the JS module and added a postinstall script which parses the config file.
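That generation step is tiny; something along these lines (file names are placeholders):

// regenerate-config.js: turn the JSON config into an importable JS module
const fs = require('fs');

const config = JSON.parse(fs.readFileSync('markdown.config.json', 'utf8'));
fs.writeFileSync(
  'src/generated-config.js',
  'module.exports = ' + JSON.stringify(config, null, 2) + ';\n'
);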
Then I started wondering if I was over-engineering this library, considering it's bound to be a relatively small aspect in its users' application. I started wondering if it even really warranted a completely separate config file and all the logic associated with parsing that for the browser. In general, would a simple withCustomConfig function actually be preferred/more-standard, or would an rc file be preferred by most people in this use case?
I have thoroughly searched the web for about a week now on creating a website using React as a front end and PHP as a backend. I have found many solutions about configuring webpack and such, but most of them only target index.html. So, I decided to create one LARGE React app and used CDN builds of React, ReactDOM, and Babel to run it on index.php, which is served by XAMPP. The main reason for creating one large app is that I cannot use the full functionality of creating components and importing them.
But now I want to use MDBootstrap and its React components, and I cannot use them since importing is not available. I have watched tutorials and read articles about webpack and configured it, but all of those were for index.html. Lastly, I have also found tutorials using PHP, XAMPP (MySQL), and React; however, most of them run on a Node server rather than the XAMPP server.
So I want to create something like:
Website
    react_app
    index.php
where react_app is created by create-react-app, or at least has the same functionality (imports and so on). I know that the npm server serves index.html from /build/, but I want to run the app on the XAMPP server using index.php.
Here is a link you can follow to do what you are asking for: https://medium.com/@MartinMouritzen/how-to-run-php-in-node-js-and-why-you-probably-shouldnt-do-that-fb12abe955b0
and another possible answer here: Execute PHP scripts within Node.js web server
The only thing is, it is not really advisable to do this; it carries a number of security risks. I would advise you to learn to separate concerns and follow the Model-View-Controller (MVC) design with React.
You can still have React as your view while PHP is in the back; requests are sent and received through an API call. Check out this simple tutorial on achieving this: https://blog.bitsrc.io/how-to-build-a-contact-form-with-react-js-and-php-d5977c17fec0
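The shape of that setup on the React side is roughly this (the endpoint path and fields are placeholders, not from the tutorial):

// React side: send form data to a PHP endpoint and read the JSON response
fetch('/api/contact.php', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'Jane', message: 'Hello from React' })
})
  .then(res => res.json())
  .then(data => console.log('PHP responded with', data))
  .catch(err => console.error(err));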
I hope this helps you with what you need. If it doesn't, let me know. Glad to help!
I've just gotten started with Swagger and Node.js. I was able to add Swagger to my Node/Express application and was also able to generate TypeScript client code with Swagger Codegen (typescript-angular, to be exact).
One problem that I have is that the generated code is spread out across many different files. I was hoping it would output just one file, api.ts, containing everything: the API calls and the interfaces/models.
I've been looking for a way to solve this problem because it is hard to read and maintain the generated client code as the backend grows.
Any suggestions or pointers would be much appreciated.
Happy Holiday! Thank you
EDIT: I have been looking for answers for this for a couple of days and still haven't found one. I'm currently working on a project with ASP.NET Core and they have NSwag which does what I want to achieve with Node Swagger.
TL;DR
You may compile all files into a single *.ts file as shown in this post.
Continuous Integration Approach
The Swagger code generator simplifies maintenance because it allows you to think in terms of continuous integration.
You should not be worried about code review or aesthetics (because it is a machine generated code), but about:
API versioning
Functions, methods and classes signatures
Documentation
If you are working with a CI system such as Jenkins or Ansible, you could automatically deploy the library to a private npm registry (for JS and TS) or a Maven server (for Java and Kotlin).
Keeping the package version number consistently updated will allow the IDE to correctly prompt the user about updates on the API.
Swagger uses Mustache templates for generating the code. For simpler changes you can simply create a copy of one of the built-in templates and modify that.
Then you can use your modified template like this:
swagger-codegen-cli generate -t path/to/template/dir/ -i spec.json
The output directory structure, however, cannot be changed using templates alone. For that you'd need a custom codegen module. You can either create your own or modify one of the built-in ones.
I'm very keen to make use of some build techniques in my JavaScript/web app development, such as:
Concatenation
Minification
Image replacement with data:uri's
Build vs Source *
App Cache Manifest generation *
It's those last two that I haven't found an answer for yet.
Build vs Source
By this I mean having a "source" version of my HTML and JavaScript that is untouched, so that I do not have to build each time to preview a change. In the source version all of my JS files are separate <script> tags as usual, whereas the build replaces those script sections with the final concatenated versions. To be honest, I feel like I'm missing something here with all of these new JavaScript build systems, as this seems like an obvious need, but I can't find anyone else talking about it. How is everyone else dealing with this? Build on each change during development?? Surely not.
App Cache Manifest generation
This explains itself - walk through my source tree and build up a manifest and insert it into my <html> tag.
I've searched for these two with no luck - any pointers?
I'd be on the road with a killer build system if it wasn't for those two.
Thanks!!!
Re: Build vs Source
It sounds like you're already familiar with grunt. You may want to consider looking into the grunt node-build-script plugin.
It adds a number of new tasks, notably grunt mkdirs and grunt copy, which duplicate your project directory into a separate staging folder and then copy your optimised project into a publish folder. If I'm not mistaken, this is what you mean by keeping an 'untouched' version of your source files?
Running grunt server will then serve up the contents of your publish files on localhost. You could always point your web server to your initial project directory if you want to examine your application in its unoptimised state.
node-build-script adds a bunch of other super convenient tasks, such as image optimisation, automatic file revving and substitution. It's incredibly easy to use and super customisable.
I have a basic single page template which uses node-build-script which also may be of interest.
Re: App Cache Manifest generation
I believe this used to be part of node-build-script but has since been removed, see 1, 2
There would be nothing stopping you from creating a custom grunt task that utilised something like confess.js, however.
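A rough sketch of such a task, assuming confess.js is driven through PhantomJS (double-check the exact confess.js arguments against its README; the URL and output path here are placeholders):

// Gruntfile.js (sketch): shell out to confess.js to generate an appcache manifest
module.exports = function (grunt) {
  grunt.registerTask('manifest', 'Generate an appcache manifest via confess.js', function () {
    var done = this.async();
    var exec = require('child_process').exec;
    exec('phantomjs confess.js http://localhost:8000/ appcache', function (err, stdout) {
      if (err) { return done(err); }
      grunt.file.write('manifest.appcache', stdout);
      done();
    });
  });
};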
Finally, it looks like Google's upcoming Yeoman might be worth keeping an eye on if you're not already!
I'm building a purely client side JavaScript based web app, and am looking to optimize the workflow for switching to CDN URLs for the JavaScript libraries I use on the production server.
In order to be able to work offline, my laptop development machine loads all the libraries from a /js folder on a local web server. When I deploy the app, I want to substitute these URLs with CDN versions, e.g. the jQuery library hosted on Google. Since there's no server-side logic, I can't make a check there for something like Rails.env.production? like I would if this were a Rails app.
I'm deploying by pushing to a git repo on the production machine and running a post-receive hook. I imagine I could run some kind of sed routine that switches the URLs over as part of the update in that same post-receive script, but am curious if there's not maybe a more elegant solution.
The easiest thing would be to simply put client-side logic into the app to check what hostname it was called from, but I'd like to keep that as a last resort.
There is a previous discussion on fallback loading here, but in a broader sense my question is about the automated swapping of one block of text for another when deploying to a production machine.
You want to commit the change that has the pointer to the CDN version on the branch from which you will be publishing.
Commit the change that will have the development version to your development branch.
Assuming that this is the only way your branches differ, merge with the "ours" strategy on both branches:
git merge -s ours mainbranch
This will ensure that you don't get that code change from now on wherever you merge or rebase.
Hope this helps.
You could simply make the CDN host work locally by adding it to your /etc/hosts.
If that's not an option, use inline JavaScript to add the <script src="..."> tags to the document and put the logic for deciding whether to use the CDN or not into that script.
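A minimal version of that inline check might look like this (adjust the hostname test, the local path and the jQuery version to your setup):

// decide between the local copy and the Google CDN based on the hostname
(function () {
  var isLocal = /^(localhost|127\.0\.0\.1)$/.test(window.location.hostname);
  var src = isLocal
    ? '/js/jquery.min.js'
    : 'https://ajax.googleapis.com/ajax/libs/jquery/3.6.0/jquery.min.js';
  document.write('<script src="' + src + '"><\/script>');
})();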