I'm trying to use Meteor.Collection.get(collection_name) (server side only) in production. It works well in development, but as soon as I try to build my app with meteor --production, Meteor throws:
TypeError: Meteor.Collection.get is not a function
I suppose that Meteor.Collection.get was only made for debugging purposes (I can't find anything about it in the official documentation). Any idea how I can use it in production?
I am not sure where Meteor.Collection.get comes from in your code, but I know the very reliable and long-time battle-proven dburles:mongo-collection-instances package, which allows you to retrieve a Mongo.Collection via its name.
Add the package:
meteor add dburles:mongo-collection-instances
Create a collection:
// server/client
import { Mongo } from 'meteor/mongo'

export const MyDocs = new Mongo.Collection('myDocs')
Get the collection:
// anywhere else
const MyDocs = Mongo.Collection.get('myDocs')
It works on the server and the client and runs fine in production.
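For example, a minimal sketch of looking a collection up by name inside a server method (the method name, error code and counting logic are just placeholders):
// server
import { Meteor } from 'meteor/meteor'
import { Mongo } from 'meteor/mongo'

Meteor.methods({
  'docs.count' (collectionName) {
    const collection = Mongo.Collection.get(collectionName)
    if (!collection) {
      throw new Meteor.Error('not-found', `no collection named ${collectionName}`)
    }
    return collection.find().count()
  }
})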
Documentation: https://github.com/dburles/mongo-collection-instances
Edit: A note on --production
This flag is only there to simulate production minification. See the important message here in the docs: https://guide.meteor.com/deployment.html#never-use-production-flag
You should always use meteor build to build a production Node app. More to read here: https://guide.meteor.com/deployment.html#custom-deployment
I have written a few apps using svelte and sapper and thought I would give sveltekit a go.
All in all it works, but I am now running into the issue of registering a worker on the server.
Basically I am trying to add socket.io to my app because I want to be able to send and receive data from the server. With sapper this wasn't really an issue because you had the server.js file where you could connect socket.io to the polka/express server. But I cannot find any equivalent in sveltekit and vite.
I experimented a bit and I can create a new socket.io server in a route, but that leads to a bunch of new problems, such as it being on a separate port and causing CORS issues.
So I am wondering is this possible with sveltekit and how do you get access to the underlying server?
The @sveltejs/adapter-node adapter also builds Express/Polka-compatible middleware, exposed as build/middlewares.js, which you can import into a custom server.cjs:
// server.cjs: Express shown here, Polka works the same way
const express = require("express");
const {
  assetsMiddleware,
  prerenderedMiddleware,
  kitMiddleware,
} = require("./build/middlewares.js");

const app = express();
app.use(assetsMiddleware, prerenderedMiddleware, kitMiddleware);
app.listen(3000); // the port is arbitrary
The node adapter also has an entryPoint option, which allows bundling the custom server into the build, but I ran into issues using this approach.
Adapters are not used during development (i.e. npx svelte-kit dev).
But using svelte.config.js you are able to inject socket.io into the Vite dev server:
// svelte.config.js (excerpt)
import { Server } from "socket.io";
// ...
kit: {
  // ...
  vite: {
    plugins: [
      {
        name: "sveltekit-socket-io",
        configureServer(server) {
          const io = new Server(server.httpServer);
          // e.g. register connection handlers here:
          io.on("connection", (socket) => {
            socket.emit("message", "hello from the dev server");
          });
        },
      },
    ],
  },
},
Note: the dev server needs to be restarted to apply changes in the server code.
You could use entr to automate that.
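For reference, a minimal sketch of the client side, assuming socket.io-client is installed (the event names are just placeholders):
// e.g. inside a Svelte component's <script>
import { io } from "socket.io-client";
import { onMount } from "svelte";

onMount(() => {
  const socket = io(); // connects to the same host/port the page was served from
  socket.on("message", (msg) => {
    console.log("server says:", msg);
  });
  socket.emit("message", "hello from the client");
});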
You cannot rely on connecting to a Polka/Express server because, depending on the adapter you choose, there may be no Polka/Express server at all, for example if you deploy to a serverless platform. Sockets for serverless are not easy to implement and their implementation depends on the provider.
You are raising an important concern, but right now I'm afraid this is not possible (someone correct me if I'm wrong).
What you can still do is write your front end with SvelteKit, build it as a static/SPA/Node application, and then serve the build from your own Polka/Express server (a sketch follows below). You lose the swift development experience offered by SvelteKit though, since your development will be split in two: first the client, then the server.
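A minimal sketch of that approach, assuming a static build output in ./build (file name, port and event names are just placeholders):
// server.js: serve the SvelteKit build and attach socket.io to the same HTTP server
const express = require("express");
const { createServer } = require("http");
const { Server } = require("socket.io");

const app = express();
app.use(express.static("build")); // the SvelteKit build output

const httpServer = createServer(app);
const io = new Server(httpServer);

io.on("connection", (socket) => {
  socket.emit("message", "hello from the standalone server");
});

httpServer.listen(3000);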
EDIT
You can also use a third-party data-pusher service. They are straightforward to use but not necessarily free. Here is a list of data-pusher services from the Vercel page:
Ably
Pusher
PubNub
Firebase Realtime Database
TalkJS
SendBird
Supabase
I'm trying to register a protocol handler using app.setAsDefaultProtocolClient and I've got it working fine on macOS, but on Windows 10 I get a dialog saying:
Error launching app
Unable to find Electron app at 'C:\Program Files(x86)\Google\Chrome\Application\60.0.3….. etc.
Cannot find module 'C:\Program Files(x86)\Google\Chrome\Application\60.0.3….. etc.
Is it right that it's looking in the Chrome\Application folder? I get the same error whether I run with npm start or from an app packaged with electron-packager.
Is there something I'm missing that I need to configure for Windows, like the plist on macOS? I've been looking around but can't seem to find anything. Let me know any info I can add to help.
I had the same problem: the protocol handler doesn't find the location of the app in the development environment on Windows. Everything works on macOS, but on Windows only when packaged. The fix is to manually supply the path to your app when registering the protocol.
Originally, I had something like this, which worked on macOS and as a packaged .exe on Windows:
if (!app.isDefaultProtocolClient('app')) {
  app.setAsDefaultProtocolClient('app');
}
Here's the fix that corrected the path problem for developing on Windows:
const { app } = require('electron');
const path = require('path');

// Remove first so we can re-register each time the app runs.
app.removeAsDefaultProtocolClient('app');

// If we are running a non-packaged version of the app on Windows
if (process.env.NODE_ENV === 'development' && process.platform === 'win32') {
  // Set the path of electron.exe and your app.
  // These two additional parameters are only available on Windows.
  app.setAsDefaultProtocolClient('app', process.execPath, [path.resolve(process.argv[1])]);
} else {
  app.setAsDefaultProtocolClient('app');
}
I had my project set up so that process.env.NODE_ENV tells me whether I'm in the development environment or not. If you are in the development environment, you want to pass two additional parameters to app.setAsDefaultProtocolClient. The first argument, of course, is the protocol you want to register; the second argument should be the path to the Electron executable. process.execPath is the default value and should evaluate to /path/to/your/project/node_modules/electron/dist/electron.exe or similar.
The third argument is an array of arguments you want to run electron.exe with. In my case, I want to run my app, so I pass in the path wrapped in an array []. process.argv[1] is simply a way to get the path of the dev app, which should evaluate to /path/to/your/project/dist/electron/main.js or similar.
For more information: https://electronjs.org/docs/api/app#appsetasdefaultprotocolclientprotocol-path-args
I got a Meteor project from a friend who develops on MacOS.
When trying to run it, I get:
This project uses METEOR#1.0.2.1, which isn't available on Windows. To
work with this app on all supported platforms, use meteor update
--release METEOR#1.2.1 to pin this app to the newest Windows-compatible release.
When running it, I get:
While checking for cfs:gridfs#0.0.27: no compatible binary file
found...
Then, when I try to override (using run instead of update) without actually updating, it starts the proxy and Mongo, but then skips past the first error and breaks at:
While building package npm-container: error: No plugin known to handle
file '../../packages.json'. If you want this file to be a static
asset, use addAssets instead of addFiles; eg,
api.addAssets('../../packages.json', 'client').
I read that this error is fixed by updating meteorhacks, but when I try to, I get the Meteor version conflict (see the very first error) and I have no idea how to break out of the loop.
Can someone shed some light on how to fix any of these errors?
There's no problem using the Meteor package strikeout:string.js on the client side (browser JS console), but it throws an error when using it on the server side.
Checked package.js and found api.addFiles('lib/string.js', ['client','server']);, is this not sufficient?
Test code
console.log(S('jon').capitalize().s)
Error on server
ReferenceError: S is not defined
Is this not sufficient? Correct, it is not: you are getting the ReferenceError because you are not requiring the module on the server.
In order to use it on the server, you should require it; in this example I'm using meteorhacks:npm.
It was not possible for me to create a MeteorPad for this, so I will go through it here step by step.
First, meteor add meteorhacks:npm
Second, in the newly created packages.json, add this entry:
{
  "string": "3.1.0"
}
Third, now just add the server code:
if (Meteor.isServer) {
  Meteor.startup(function () {
    var S = Meteor.npmRequire('string'); // server side
    console.log(S('jon').capitalize().s);
  });
}
Expected Output
I20150326-10:54:05.639(-5)? Jon
Hope it works for you.
I'm really enjoying learning about web development with Parse.com. I have a cloud app that serves jade templates and a few cloud functions that I'd like to call from .js in the browser.
I'm trying to set up for development and production using the Parse docs here, but I've become confused. It's my understanding that I'll have one source tree on my development machine, but two Parse applications that I'll deploy to alternately as development and production.
It seems using the command line parse add <alias> will add credentials to my config/global.json file, but what about my statically served .js files that need to make cloud calls? They start out:
Parse.$ = jQuery;
Parse.initialize("my app id", "my app js key");
If I have only one code repository, I'll have to touch these keys before I deploy to production. That can't be right, can it? If I forget, I'll deploy a broken app. Am I mixed up, or is this just something I must deal with?
For a given session you only need to initialize Parse once. This means that you can do this when the browser loads from a single location.
You could create some sort of build script that modifies the keys.
Alternatively, on load, make a call to a separate service which holds your keys and which returns the correct key depending on your environment.
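For example, a minimal sketch of that second approach, assuming you host a /config endpoint yourself that returns the keys for the current environment (the endpoint and field names are just placeholders):
// fetch the keys for this environment before initializing Parse
Parse.$ = jQuery;

$.getJSON('/config', function (config) {
  Parse.initialize(config.appId, config.jsKey);
  // ...start the rest of the app once Parse is ready
});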
In case anyone else has this problem, here's what I did (thanks to @Kenneth for the suggestion). The script first checks whether git has any unstaged changes and refuses to run unless I've checked in all the changes.
Then it replaces all my dev IDs/keys in the .js files with production versions, deploys to my Parse production app, and finally restores the .js files to their development keys:
#!/bin/bash
if git diff-index --quiet HEAD --; then
  echo 'Replacing app id and js keys with production keys'
  sed -i '' 's/my-development-app-id/my-production-app-id/g' ./public/*.js
  sed -i '' 's/my-development-js-key/my-production-js-key/g' ./public/*.js
  parse deploy production
  echo 'Changing back to development keys'
  git checkout *.js
else
  echo 'Must commit all changes before deploying to production'
fi
Similarly, to separate our environments we deployed a Parse app for each one needed (say dev, qa, prod) and used the different resulting URLs (the subdomain here, but really any differing part will do) to tell them apart and discover our environment in the code. We then stored the environment in an attribute.
var APP_ID, JS_KEY;

// Figure out the environment from the URL (the subdomain here)
switch (location.host.split(".")[0]) {
  case 'myappprod': // ex: myappprod.parseapp.com
    MyApp.env = 'prod';
    APP_ID = 'theprodappid';
    JS_KEY = 'theprodjskey';
    break;
  case 'myappqa':
    MyApp.env = 'qa';
    APP_ID = 'theqaappid';
    JS_KEY = 'theqajskey';
    break;
  default: // otherwise dev
    MyApp.env = 'dev';
    APP_ID = 'thedevappid';
    JS_KEY = 'thedevjskey';
    break;
}

// MyApp is the app's own namespace object; now initialize Parse with the selected keys.
Parse.initialize(APP_ID, JS_KEY);
You can also hint at the environment (app) you want to use in your local setup using this same technique. Just have the virtual host you use with your web server match all three local URLs. For example, with nginx:
server_name myappdev.parseapp.dev myappqa.parseapp.dev myappprod.parseapp.dev;