I'm using the PhantomJS debug extension from https://github.com/iradul/vscode-phantomjs-debug, which is great for getting me into the main module, but I can't debug into modules in the /modules directory.
The project is set up as:
\parent
--main.js
\parent\modules
--processing.js
processing.js is pulled into main.js in the usual manner with a require at the top of main.js
I'm sure there is a setting in the .vscode/launch.json that would get me into the processing.js file, I'm just not sure what it is.
processing.js is where I handle the page.onXXX callbacks such as page.onError, page.onResourceRequested, and page.onResourceReceived.
This isn't a TypeScript project; it's straight JS.
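To make the setup concrete, here's a minimal sketch of how the two files are wired together (the handler bodies are illustrative, not my actual code):

// modules/processing.js -- exports the page callbacks (illustrative bodies)
exports.onError = function (msg, trace) {
    console.log('ERROR: ' + msg);
};
exports.onResourceRequested = function (requestData, networkRequest) {
    console.log('Requested: ' + requestData.url);
};

// main.js -- pulls the handlers in and attaches them to the page
var processing = require('./modules/processing');
var page = require('webpage').create();
page.onError = processing.onError;
page.onResourceRequested = processing.onResourceRequested;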
Since Phantom's remote debugger currently does not resolve the full paths of loaded modules, the VS Code debugger can only guess where they are. You'll have to place your modules inside a node_modules directory. You'll also have to put a breakpoint inside your main module somewhere after the require call for your modules.
Take a look at this for more info:
https://github.com/iradul/vscode-phantomjs-debug/wiki/Debug-modules
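In other words, something along these lines (a rough sketch, assuming the module is moved under node_modules/processing and keeps the same exports):

// main.js -- the module now lives in ./node_modules/processing instead of ./modules
var processing = require('processing');

// Set a breakpoint on one of the lines below (i.e. after the require call),
// then you can step into the functions defined in processing.
var page = require('webpage').create();
page.onError = processing.onError;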
I have a project that uses source files external to the project. Effectively, there is the actual project source code (a TypeScript/Angular 2 application, let's call it the 'core' stuff), and this is a generic web application that is meant to be the base code that consumes these external source files.
The external files include additional stuff: that could be SCSS files, images, even additional JS. The way I want this to work is that webpack copies these external files from any source directory (this is critical: it is not part of the core project) to a local .tmp directory. The files in the .tmp directory are worked on along with the core src files to generate the prod output.
I can't figure out how to add these additional external source files to the watch list. Effectively what I'm looking to do is watch that directory and, as things change, re-copy the affected files to the local .tmp directory and trigger a recompile.
Presently I have to restart webpack, and I have a very, very ugly solution using Grunt to watch the additional files. It's nasty, but these kinds of workarounds have historically been what I've had to do with webpack.
Does anyone have a better solution? Ideally I'd like not to have to mix Grunt with webpack. Webpack should be able to do this, but it's hard to know whether there's an existing plugin for this or what the best approach would be.
Also, please spare the "look for it on google" or "read the docs" comments. I've combed through it all, hard, and have not found anything.
Thanks in advance.
As of now, webpack doesn't watch external files out of the box. You need a plugin for that.
The basic idea is to have a file-watcher module such as chokidar or watch listening for file changes and, when there is a change, restarting the webpack compilation phase. Webpack plugins can access the compilation object, and you need to hook into a compiler phase, e.g. 'emit', 'after-emit', etc.
This webpack plugin solves exactly your problem: https://www.npmjs.com/package/filewatcher-webpack-plugin.
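A rough sketch of the idea (not the plugin's actual source; the plugin name and the externalDir option are illustrative, and the hook syntax is the webpack 1-3 compiler.plugin style):

var chokidar = require('chokidar');

function ExternalWatchPlugin(options) {
  // Directory outside the webpack context that should trigger rebuilds.
  this.externalDir = options.externalDir;
}

ExternalWatchPlugin.prototype.apply = function (compiler) {
  var externalDir = this.externalDir;
  var watcherStarted = false;

  compiler.plugin('after-emit', function (compilation, callback) {
    if (!watcherStarted) {
      watcherStarted = true;
      chokidar.watch(externalDir).on('change', function (changedPath) {
        // Re-copy changedPath into .tmp here if needed, then kick off a new build.
        compiler.run(function (err, stats) { /* handle errors / logging */ });
      });
    }
    callback();
  });
};

module.exports = ExternalWatchPlugin;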
We are in the process of uplifting parts of a huge code base. We are bringing in a module which has been built with webpack. In order to avoid code duplication we have used webpack's externals option.
When we start to integrate our module into the main codebase, which currently uses Browserify, we have an issue where a shared dependency is included twice, which causes problems.
Is there a way to have webpack use the packaged version of the dependency, so that in the final browserified bundle the dependency is only included once?
It seems to me like this may not be possible; if so, I will push for moving the rest of our codebase onto webpack (it is underway already).
The only solution I have come up with so far is to have the webpack module also export the shared dependency, and then have the main app use that export, but this is not ideal.
I have ensured that the dependency in both node_modules folders is at the same version, and I still get 2 instances.
I need to be able to tell Browserify to only resolve my app's node_modules, or tell it to resolve from top to bottom, i.e. look in the top-level node_modules first. Is this possible?
I have tried setting the NODE_PATH option when using the CLI, to no effect.
Update:
So the issue is that when Browserify hits the require() statement in the webpack bundle, it resolves from the local node_modules folder, so we end up with 2 instances of the dependency. I can fix this by using a relative path in either the app's require() or webpack's externals option and ensuring they both point at the same file.
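For context, the externals declaration in the module's webpack config looks roughly like this (a sketch; 'shared-dependency' stands in for the real package name). It leaves a bare require('shared-dependency') in the emitted bundle, which Browserify then resolves against the nearest node_modules:

// webpack.config.js of the shared module (sketch)
module.exports = {
  // ...
  externals: {
    // Emit require('shared-dependency') instead of bundling the package.
    'shared-dependency': 'commonjs shared-dependency'
  }
};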
So it seems the issue was caused by the fact that I had symlinked the module (with npm link), as I was working on it at the same time. It seems that when Browserify resolves the require in the symlinked module, it resolves to the symlink's own node_modules, and we end up with 2 copies.
When I install the module normally it all works fine, so this is OK as other consumers of the module should have no issues. It is annoying that it behaves this way, but I just need to ensure that I point at the same dependency in my main app when developing alongside the module.
So my require statement in the main app (while symlinking) looks something like this:
require('./node_modules/my-module/node_modules/shared-dependency/index.js');
I can require as normal when not symlinking.
I'm developing a JS app, http://myclibu.com, which shares code between Node.js on the server and code in the browser. And I'm migrating from using require.js to jspm.io for module loading in the browser.
My ideal folder structure is:
app -|- browser
     |- server
     |- shared
However, so far I can't get config.paths in jspm to work with files outside of the browser folder (outside of baseURL), which means I need to copy shared into browser to get the files to load.
Is there any way I can use the folder structure as shown above? Note that I'm developing on Windows, in case that matters.
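For illustration, this is roughly the kind of mapping I'm attempting (a sketch only; jspm generates the real config.js, and the path values here are my guesses at what ought to work):

// config.js (sketch)
System.config({
  baseURL: "/browser",
  paths: {
    // Map the shared folder, which lives outside baseURL -- this is what
    // I can't get to resolve.
    "shared/*": "../shared/*.js"
  }
});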
I am playing with Yeoman, trying to build a web site using the webapp generator.
I've managed to create a web site that works under grunt server: when I change a JS file, grunt notices the change, does a live reload, and everything works as you would expect.
When I try a plain grunt, it attempts to run the dist task; it manages to include my HTML files, but skips any of the JavaScript or style files I created in the scripts and styles directories. I assume I have to tell grunt to include these files.
Files such as main.js seem to make it through, but there are no references to main.js in the Gruntfile, so I'm not sure which part of Gruntfile.js to change.
Running yo doctor reports:
[Yeoman Doctor] Everything looks all right!
Q. How do I tell grunt to include user-created files?
Q. I noticed that all my image files were renamed; fair enough, but how do I refer, from a JavaScript file, to a file that I know is going to be renamed?
Q. Does anybody know a good web resource for Yeoman where these questions might already have been answered?
Be careful with the glob pattern scripts/{,*}/*.js. This takes only the JS files that are inside scripts or its immediate child folders.
Make sure to change it to scripts/**/*.js to include all JS files in all subfolders.
Also, look at the build:css and build:js blocks in your index.html, and at the wiredep plugin Yeoman uses in the Gruntfile, to understand which files will be injected into the dist folder.
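For example, the pattern usually shows up in the generated Gruntfile along these lines (a sketch; the exact task names and the yeoman.app variable depend on the generator version):

// Gruntfile.js (sketch) -- widen the scripts glob wherever it appears
module.exports = function (grunt) {
  grunt.initConfig({
    yeoman: { app: 'app', dist: 'dist' },
    jshint: {
      all: [
        'Gruntfile.js',
        // was: '<%= yeoman.app %>/scripts/{,*}/*.js'
        '<%= yeoman.app %>/scripts/**/*.js'
      ]
    },
    watch: {
      js: {
        // was: '<%= yeoman.app %>/scripts/{,*}/*.js'
        files: ['<%= yeoman.app %>/scripts/**/*.js'],
        tasks: ['jshint']
      }
    }
  });
};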
My question is partly technical and partly about deployment strategies and workflow. I built a project using RequireJS. It includes a number of distinct JS modules and is built upon Kirby CMS. The directory structure of the project is something like this:
project
    assets
        styles
            style.css
        js
            scripts
                script1.js
                script2.js
                script3.js
            vendor
            app.js
        images
        fonts
    content
        ...
    kirby folders
        ....
The file app.js is called in the footer of my site's page like so:
<script data-main="/assets/js/app" src="/assets/js/vendor/require.js"></script>
It configures RequireJS by calling the requirejs.config() function and then loads the main script, which pulls in everything else, using RequireJS's requirejs() function.
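In rough outline, app.js does something like this (a sketch; the paths and module names are illustrative, not my actual config):

// assets/js/app.js (sketch)
requirejs.config({
    baseUrl: '/assets/js',
    paths: {
        jquery: 'vendor/jquery'
    }
});

// Load the entry module, which requires the rest of the scripts.
requirejs(['scripts/script1'], function (script1) {
    script1.init();
});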
I've used RequireJS's optimization tool to compile the project in such a way that the optimized files are all dumped into a directory called dist (a name I just picked up from this tutorial). So in the end dist contains a replication of every directory and file under assets, only optimized, and the file app.js is a concatenated and optimized version of all the JS modules that I have in the project. So far so good.
What I am unsure about, however, is how I'm supposed to make use of this new secondary version of all the code. What if, for instance, I want to deploy a version of the site to the production server without all the source JS files? Each time I deploy the site, I would need to go through my code and in every place that I referred to files under the assets directory, I would need to replace that with dist. I deploy using git and beanstalk. One way to do this would be to maintain different branches for staging, production, and development, in which the production and perhaps staging branches have references to the files under dist, but this seems awkward.
So my question is: given this kind of optimization setup, which, if you look at the tutorial linked above, is one way to do this, how do you then manage the switch to the optimized version of everything seamlessly, without having to go back into your code and change everything? Is there some key part of the process that I'm missing here?
Each time I deploy the site, I would need to go through my code and in every place that I referred to files under the assets directory, I would need to replace that with dist.
I've looked at the tutorial you linked to and do not see how this is true for the tutorial. The tutorial does not use absolute paths, so it should be deployable from dist just as well as from the directory that contains the pre-optimization sources. If you cannot do this for your application, that's because you've done something different from the tutorial. Your script tag, for instance, uses absolute paths.
So the solution is to design your application to avoid absolute paths. This way, you won't have to change paths when you deploy from dist. I'm using this very method to deploy optimized and non-optimized versions of one of my apps.
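For reference, an r.js build config in that spirit looks roughly like this (a sketch only; directory names are illustrative). Because appDir is copied wholesale into dir and every reference inside it is relative, the dist copy can be served as-is, with no path rewriting:

// build.js (sketch) -- run with: node r.js -o build.js
({
    appDir: '.',                         // directory containing index.html and assets/
    baseUrl: 'assets/js',
    dir: 'dist',                         // optimized copy of the whole app lands here
    mainConfigFile: 'assets/js/app.js',
    modules: [
        { name: 'app' }                  // concatenates app.js and everything it requires
    ]
})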